Info-gap decision theory

Info-gap decision theory is a non-probabilistic decision theory that seeks to optimize robustness to failure – or opportuneness for windfall – under severe uncertainty,[1][2] in particular applying sensitivity analysis of the stability radius type[3] to perturbations in the value of a given estimate of the parameter of interest. It has some connections with Wald's maximin model; some authors distinguish them, others consider them instances of the same principle.

It has been developed since the 1980s by Yakov Ben-Haim,[4] has found many applications, and has been described as a theory for decision-making under "severe uncertainty". It has been criticized as unsuited for this purpose, and alternatives have been proposed, including such classical approaches as robust optimization.

Summary

Info-gap is a decision theory: it seeks to assist in decision-making under uncertainty. It does this by using three models, each of which builds on the last. One begins with a model for the situation, where some parameter or parameters are unknown. One then takes an estimate for the parameter, which is assumed to be substantially wrong, and one analyzes how sensitive the outcomes under the model are to the error in this estimate.

Uncertainty model
Starting from the estimate, an uncertainty model measures how distant other values of the parameter are from the estimate: as uncertainty increases, the set of possible values increases – if one is this uncertain about the estimate, what other parameter values are possible?
Robustness/opportuneness model
Given an uncertainty model and a minimum level of desired outcome, then, for each decision, how uncertain can you be and still be assured of achieving this minimum level? (This is called the robustness of the decision.) Conversely, given a desired windfall outcome, how uncertain must you be for this desirable outcome to be possible? (This is called the opportuneness of the decision.)
Decision-making model
To decide, one optimizes either the robustness or the opportuneness, on the basis of the robustness or opportuneness model. Given a desired minimum outcome, which decision is most robust (can stand the most uncertainty) and still give the desired outcome (the robust-satisficing action)? Alternatively, given a desired windfall outcome, which decision requires the least uncertainty for the outcome to be achievable (the opportune-windfalling action)?

Models

Info-gap theory models uncertainty \alpha (the horizon of uncertainty) as nested subsets \mathcal{U}(\alpha, \tilde{u}) around a point estimate \tilde{u} of a parameter: with no uncertainty, the estimate is correct, and as uncertainty increases, the subset grows, in general without bound. The subsets quantify uncertainty – the horizon of uncertainty measures the "distance" between an estimate and a possibility – providing an intermediate measure between a single point (the point estimate) and the universe of all possibilities, and giving a measure for sensitivity analysis: how uncertain can an estimate be and a decision (based on this incorrect estimate) still yield an acceptable outcome – what is the margin of error?

Info-gap is a local decision theory, beginning with an estimate and considering deviations from it; this contrasts with global methods such as minimax, which considers worst-case analysis over the entire space of outcomes, and probabilistic decision theory, which considers all possible outcomes, and assigns some probability to them. In info-gap, the universe of possible outcomes under consideration is the union of all of the nested subsets: \mathfrak{U} := \bigcup_\alpha \mathcal{U}(\alpha, \tilde{u}).

Info-gap analysis gives answers to such questions as: how much uncertainty can a given decision tolerate while still meeting the minimal requirements (its robustness), and how little uncertainty suffices for a windfall outcome to become possible (its opportuneness)?

It can be used for satisficing, as an alternative to optimizing in the presence of uncertainty or bounded rationality; see robust optimization for an alternative approach.

Comparison with classical decision theory

In contrast to probabilistic decision theory, info-gap analysis does not use probability distributions: it measures the deviation of errors (differences between the parameter and the estimate), but not the probability of outcomes – in particular, the estimate \tilde{u} is in no sense more or less likely than other points, as info-gap does not use probability. Info-gap, by not using probability distributions, is robust in that it is not sensitive to assumptions on probabilities of outcomes. However, the model of uncertainty does include a notion of "closer" and "more distant" outcomes, and thus includes some assumptions, and is not as robust as simply considering all possible outcomes, as in minimax. Further, it considers a fixed universe \mathfrak{U}, so it is not robust to unexpected (not modeled) events.

The connection to minimax analysis has occasioned some controversy: (Ben-Haim 1999, pp. 271–2) argues that info-gap's robustness analysis, while similar in some ways, is not minimax worst-case analysis, as it does not evaluate decisions over all possible outcomes, while (Sniedovich, 2007) argues that the robustness analysis can be seen as an example of maximin (not minimax), applied to maximizing the horizon of uncertainty. This is discussed in criticism, below, and elaborated in the classical decision theory perspective.

Basic example: budget

As a simple example, consider a worker with uncertain income. They expect to make $100 per week, while if they make under $60 they will be unable to afford lodging and will sleep in the street, and if they make over $150 they will be able to afford a night's entertainment.

Using the info-gap absolute error model:


\mathcal{U}(\alpha, {\tilde{u}}) = \left \{ u : \ |u - {\tilde{u}} | \le \alpha \right \} , \qquad \alpha \ge 0

where \tilde u = \$100, one would conclude that the worker's robustness \hat\alpha is $40, and their opportuneness \hat\beta is $50: if they are certain that they will make $100, they will neither sleep in the street nor feast, and likewise if they make within $40 of $100. However, if they erred in their estimate by more than $40, they may find themselves on the street, while if they erred by more than $50, they may find themselves in clover.

As stated, this example is only descriptive, and does not enable any decision making – in applications, one considers alternative decision rules, and often situations with more complex uncertainty.

Consider now the worker thinking of moving to a different town, where the work pays less but lodgings are cheaper. Say that here they estimate that they will earn $80 per week, but lodging only costs $44, while entertainment still costs $150. In that case the robustness function will be $36, while the opportuneness function will be $70. If they make the same errors in both cases, the second case (moving) is both less robust and less opportune.

On the other hand, if one measures uncertainty by relative error, using the fractional error model:


\mathcal{U}(\alpha, {\tilde{u}}) = \left \{ u : \ |u - {\tilde{u}} | \le \alpha \tilde u \right \} , \qquad \alpha \ge 0

in the first case robustness is 40% and opportuneness is 50%, while in the second case robustness is 45% and opportuneness is 87.5%, so moving is more robust and less opportune.
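
To make the arithmetic explicit, here is a minimal Python sketch (the function names are illustrative, not taken from the info-gap literature) that reproduces the robustness and opportuneness figures above for both uncertainty models:

# Closed-form robustness and opportuneness for the two scalar uncertainty
# models in the budget example. Since the worst/best income sits on the
# boundary of the uncertainty set, each quantity has a closed form.
# (Illustrative sketch, not library code.)

def robustness_absolute(estimate, critical):
    # largest alpha such that estimate - alpha >= critical
    return max(estimate - critical, 0.0)

def opportuneness_absolute(estimate, windfall):
    # smallest alpha such that estimate + alpha >= windfall
    return max(windfall - estimate, 0.0)

def robustness_fractional(estimate, critical):
    # largest alpha such that estimate * (1 - alpha) >= critical
    return max((estimate - critical) / estimate, 0.0)

def opportuneness_fractional(estimate, windfall):
    # smallest alpha such that estimate * (1 + alpha) >= windfall
    return max((windfall - estimate) / estimate, 0.0)

windfall = 150.0
for estimate, critical in [(100.0, 60.0), (80.0, 44.0)]:
    print(estimate,
          robustness_absolute(estimate, critical),       # 40.0, then 36.0
          opportuneness_absolute(estimate, windfall),    # 50.0, then 70.0
          robustness_fractional(estimate, critical),     # 0.4,  then 0.45
          opportuneness_fractional(estimate, windfall))  # 0.5,  then 0.875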

This example demonstrates the sensitivity of analysis to the model of uncertainty.

Info-gap models

Info-gap can be applied to spaces of functions; in that case the uncertain parameter is a function u(x), with estimate {\tilde u}(x), and the nested subsets are sets of functions. One way to describe such a set of functions is by requiring values of u to be close to values of {\tilde u} for all x, using a family of info-gap models on the values.

For example, the above fractional error model for values becomes the fractional error model for functions by adding a parameter x to the definition:


\mathcal{U}(\alpha, {\tilde{u}}) = \left \{ u(x): \ 
|u(x) - {\tilde{u}}(x) | \le \alpha {\tilde{u}}(x), \ \mbox{for all}\ x \in X \right \} , \ \ \ \alpha \ge 0.

More generally, if U(\alpha,y) is a family of info-gap models of values, then one obtains an info-gap model of functions in the same way:


\mathcal{U}(\alpha, {\tilde{u}}) = \left \{ u(x): \ 
u(x) \in U(\alpha,{\tilde{u}}(x)), \ \mbox{for all}\ x \in X \right \} , \ \ \ \alpha \ge 0.
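
A minimal sketch (Python; the grid X, the nominal function and the helper names are illustrative assumptions) of this construction, lifting a pointwise membership test to a membership test for functions on a finite grid:

# Lifting a pointwise info-gap model to functions on a finite grid X:
# u belongs to the function-level set at horizon alpha iff u(x) lies in the
# value-level set U(alpha, u_tilde(x)) for every x in X. The fractional-error
# model is used for the values. (Illustrative sketch only.)

def fractional_value_set(alpha, center):
    # membership test for the value-level set U(alpha, center)
    return lambda value: abs(value - center) <= alpha * center

def in_function_set(u, u_tilde, alpha, X, value_set=fractional_value_set):
    # membership test for the lifted, function-level info-gap model
    return all(value_set(alpha, u_tilde(x))(u(x)) for x in X)

X = [0.0, 0.5, 1.0, 1.5, 2.0]
u_tilde = lambda x: 1.0 + x          # nominal (estimated) function
u = lambda x: 1.1 * (1.0 + x)        # candidate function, 10% above the nominal

print(in_function_set(u, u_tilde, alpha=0.05, X=X))  # False: 10% deviation exceeds 5%
print(in_function_set(u, u_tilde, alpha=0.20, X=X))  # True: 10% deviation is within 20%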

Motivation

It is common to make decisions under uncertainty.[note 1] What can be done to make good (or at least the best possible) decisions under conditions of uncertainty? Info-gap robustness analysis evaluates each feasible decision by asking: how much deviation from an estimate of a parameter value, function, or set, is permitted and yet "guarantee" acceptable performance? In everyday terms, the "robustness" of a decision is set by the size of deviation from an estimate that still leads to performance within requirements when using that decision. It is sometimes difficult to judge how much robustness is needed or sufficient. However, according to info-gap theory, the ranking of feasible decisions in terms of their degree of robustness is independent of such judgments.

Info-gap theory also proposes an opportuneness function which evaluates the potential for windfall outcomes resulting from favorable uncertainty.

Example: resource allocation

Here is an illustrative example, which introduces the basic concepts of info-gap theory. A more rigorous description and discussion follow in later sections.

Resource allocation

Suppose you are a project manager, supervising two teams: red team and blue team. Each of the teams will yield some revenue at the end of the year. This revenue depends on the investment in the team – higher investments will yield higher revenues. You have a limited amount of resources, and you wish to decide how to allocate these resources between the two groups, so that the total revenues of the project will be as high as possible.

If you have an estimate of the relationship between the investment in each team and its revenue, as illustrated in Figure 1, you can also estimate the total revenue as a function of the allocation. This is exemplified in Figure 2 – the left-hand side of the graph corresponds to allocating all resources to the red team, while the right-hand side of the graph corresponds to allocating all resources to the blue team. A simple optimization will reveal the optimal allocation – the allocation that, under your estimate of the revenue functions, will yield the highest revenue.
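
As a rough sketch of this nominal optimization, the following Python snippet assumes two concave square-root revenue curves (purely illustrative; they are not taken from the article's figures) and searches over the fraction of the budget given to the red team:

# Nominal (no-uncertainty) allocation: split a fixed budget between two teams
# using estimated revenue curves and pick the split with the highest total.
# The square-root revenue curves below are assumed purely for illustration.
import math

BUDGET = 10.0

def red_revenue(investment):
    return 4.0 * math.sqrt(investment)    # assumed estimate for the red team

def blue_revenue(investment):
    return 3.0 * math.sqrt(investment)    # assumed estimate for the blue team

def total_revenue(q):
    # q is the fraction of the budget allocated to the red team
    return red_revenue(q * BUDGET) + blue_revenue((1.0 - q) * BUDGET)

# brute-force search over allocations (q = 1: all to red, q = 0: all to blue)
allocations = [i / 1000 for i in range(1001)]
q_star = max(allocations, key=total_revenue)
print(q_star, total_revenue(q_star))  # optimum at q = 16/25 = 0.64 for these curves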

Introducing uncertainty

However, this analysis does not take uncertainty into account. Since the revenue functions are only a (possibly rough) estimate, the actual revenue functions may be quite different. For any level of uncertainty (or horizon of uncertainty) we can define an envelope within which we assume the actual revenue functions lie. Higher uncertainty corresponds to a more inclusive envelope. Two of these uncertainty envelopes, surrounding the revenue function of the red team, are represented in Figure 3. As illustrated in Figure 4, the actual revenue function may be any function within a given uncertainty envelope. Of course, some instances of the revenue functions are only possible when the uncertainty is high, while small deviations from the estimate are possible even when the uncertainty is small.

These envelopes are called info-gap models of uncertainty, since they describe one's understanding of the uncertainty surrounding the revenue functions.

From the info-gap models (or uncertainty envelopes) of the revenue functions, we can determine an info-gap model for the total amount of revenues. Figure 5 illustrates two of the uncertainty envelopes defined by the info-gap model of the total amount of revenues.

Robustness

Now, assume that, as a project manager, high revenues will earn you the senior management's respect, but if the total revenues fall below some threshold, it will cost you your job. We will call such a threshold the critical revenue, since total revenues beneath the critical revenue will be considered a failure.

For any given allocation, the robustness of the allocation, with respect to the critical revenue, is the maximal uncertainty that still guarantees that the total revenue will exceed the critical revenue. This is demonstrated in Figure 6. If the uncertainty increases beyond this level, the envelope of uncertainty becomes inclusive enough to contain instances of the total revenue function that, for the specific allocation, yield a revenue smaller than the critical revenue.

The robustness measures the immunity of a decision to failure. A robust satisficer is a decision maker who prefers choices with higher robustness.

If, for some allocation q, we plot the robustness as a function of the critical revenue, we obtain a graph somewhat similar to Figure 7. This graph, called the robustness curve of allocation q, has two important features that are common to (most) robustness curves:

  1. The curve is non-increasing. This captures the notion that when we have higher requirements (higher critical revenue), we are less immune to failure (lower robustness). This is the tradeoff between quality and robustness.
  2. At the nominal revenue, that is, when the critical revenue equals the revenue under the nominal model (our estimate of the revenue functions), the robustness is zero. This is because even a slight deviation from the estimate may decrease the total revenue below the critical revenue.

If we compare the robustness curves of two allocations, q and q', it is not uncommon that the two curves intersect, as illustrated in Figure 8. In this case, neither allocation is strictly more robust than the other: for critical revenues smaller than the crossing point, allocation q' is more robust than allocation q, while the reverse holds for critical revenues higher than the crossing point. That is, the preference between the two allocations depends on the criterion of failure – the critical revenue.
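
The following Python sketch continues the illustrative revenue curves used in the snippet above and adds an assumed uncertainty model in which the red team's revenue is twice as uncertain as the blue team's (an assumption made purely so that the curves cross); it computes the robustness of two allocations for several critical revenues and exhibits such a crossing:

# Robustness curves for two allocations. Assumption (for illustration only):
# at horizon alpha the worst-case total revenue is
#   red(q)*(1 - 2*alpha) + blue(1 - q)*(1 - alpha), floored at zero per team,
# i.e. the red team's revenue is twice as uncertain as the blue team's.
import math

BUDGET = 10.0
def red_revenue(i):  return 4.0 * math.sqrt(i)   # assumed estimates, as above
def blue_revenue(i): return 3.0 * math.sqrt(i)

def worst_case_total(q, alpha):
    red  = red_revenue(q * BUDGET)          * max(1.0 - 2.0 * alpha, 0.0)
    blue = blue_revenue((1.0 - q) * BUDGET) * max(1.0 - alpha, 0.0)
    return red + blue

def robustness(q, r_critical, alphas):
    # largest horizon whose worst-case total still meets the critical revenue
    feasible = [a for a in alphas if worst_case_total(q, a) >= r_critical]
    return max(feasible, default=0.0)

alphas = [i / 1000 for i in range(1001)]
for r_c in [6.0, 10.0, 14.0]:
    print(r_c, robustness(0.64, r_c, alphas), robustness(0.20, r_c, alphas))
# The two robustness curves cross: for the low critical revenue (6.0) the
# allocation q = 0.20 is more robust, for the higher ones q = 0.64 is.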

Opportuneness

Suppose that, in addition to the threat of losing your job, the senior management offers you a carrot: if the revenues exceed a certain threshold, you will be awarded a considerable bonus. Although revenues below this threshold will not be considered a failure (you may still keep your job), a revenue above it will be considered a windfall success. We will therefore call this threshold the windfall revenue.

For any given allocation, the opportuneness of the allocation, with respect to the windfall revenue, is the minimal uncertainty for which it is possible for the total revenue to exceed the windfall revenue. This is demonstrated in Figure 9. If the uncertainty decreases below this level, the envelope of uncertainty becomes too small to include any instance of the total revenue function that, for the specific allocation, yields a revenue higher than the windfall revenue.

The opportuneness may be considered as the immunity to windfall success. Therefore, lower opportuneness is preferred to higher opportuneness.

If, for some allocation q, we plot the opportuneness as a function of the windfall revenue, we obtain a graph somewhat similar to Figure 10. This graph, called the opportuneness curve of allocation q, has two important features that are common to (most) opportuneness curves:

  1. The curve is non-decreasing. This captures the notion that when we have a more ambitious goal (higher windfall revenue), we are more immune to windfall success (higher opportuneness, which is less desirable). That is, we need a more substantial deviation from the estimate in order to achieve our ambitious goal. This is the tradeoff between quality and opportuneness.
  2. At the nominal revenue, that is, when the windfall revenue equals the revenue under the nominal model (our estimate of the revenue functions), the opportuneness is zero. This is because no deviation from the estimate is needed in order to achieve the windfall revenue.
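
A matching sketch (Python; same illustrative assumptions as the robustness sketch above) of the opportuneness curve of a single allocation:

# Opportuneness curve for one allocation, mirroring the robustness sketch
# above, under the same illustrative assumption that the red team's revenue
# is twice as uncertain as the blue team's.
import math

BUDGET = 10.0
def red_revenue(i):  return 4.0 * math.sqrt(i)
def blue_revenue(i): return 3.0 * math.sqrt(i)

def best_case_total(q, alpha):
    return (red_revenue(q * BUDGET) * (1.0 + 2.0 * alpha)
            + blue_revenue((1.0 - q) * BUDGET) * (1.0 + alpha))

def opportuneness(q, r_windfall, alphas):
    # smallest horizon whose best-case total reaches the windfall revenue
    feasible = [a for a in alphas if best_case_total(q, a) >= r_windfall]
    return min(feasible, default=float("inf"))

alphas = [i / 1000 for i in range(1001)]
for r_w in [16.0, 20.0, 24.0]:
    print(r_w, opportuneness(0.64, r_w, alphas))
# The curve is non-decreasing: more ambitious windfall targets require a
# larger horizon of uncertainty before they become possible.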

Treatment of severe uncertainty

The logic underlying the above illustration is that the (unknown) true revenue is somewhere in the immediate neighborhood of the (known) estimate of the revenue. For if this is not the case, what is the point of conducting the analysis exclusively in this neighborhood?

Therefore, to remind ourselves that info-gap's manifest objective is to seek robust solutions for problems that are subject to severe uncertainty, it is instructive to display, alongside the results generated by the estimate, those associated with the true value of the revenue. Of course, given the severity of the uncertainty we do not know the true value.

What we do know, however, is that according to our working assumptions the estimate we have is a poor indication of the true value of the revenue and is likely to be substantially wrong. So, methodologically speaking, we have to display the true value at a distance from its estimate. In fact, it would be even more enlightening to display a number of possible true values.

In short, methodologically speaking the picture is this:

Note that in addition to the results generated by the estimate, two "possible" true values of the revenue are also displayed at a distance from the estimate.

As indicated by the picture, since info-gap's robustness model applies its maximin analysis in an immediate neighborhood of the estimate, there is no assurance that the analysis is in fact conducted in the neighborhood of the true value of the revenue. In fact, under conditions of severe uncertainty this—methodologically speaking—is very unlikely.

This raises the question: how valid/useful/meaningful are the results? Aren't we sweeping the severity of the uncertainty under the carpet?

For example, suppose that a given allocation is found to be very fragile in the neighborhood of the estimate. Does this mean that this allocation is also fragile elsewhere in the region of uncertainty? Conversely, what guarantee is there that an allocation that is robust in the neighborhood of the estimate is also robust elsewhere in the region of uncertainty, indeed in the neighborhood of the true value of the revenue?

More fundamentally, given that the results generated by info-gap are based on a local revenue/allocation analysis in the neighborhood of an estimate that is likely to be substantially wrong, we have no other choice—methodologically speaking—but to assume that the results generated by this analysis are equally likely to be substantially wrong. In other words, in accordance with the universal Garbage In - Garbage Out Axiom, we have to assume that the quality of the results generated by info-gap's analysis is only as good as the quality of the estimate on which the results are based.

The picture speaks for itself.

What emerges then is that info-gap theory is yet to explain in what way, if any, it actually attempts to deal with the severity of the uncertainty under consideration. Subsequent sections of this article will address this severity issue and its methodological and practical implications.

A more detailed analysis of an illustrative numerical investment problem of this type can be found in Sniedovich (2007).

Uncertainty models

Info-gaps are quantified by info-gap models of uncertainty. An info-gap model is an unbounded family of nested sets; a frequently encountered example is a family of nested ellipsoids all having the same shape. The structure of the sets in an info-gap model derives from the information about the uncertainty. In general terms, the structure of an info-gap model of uncertainty is chosen to define the smallest or strictest family of sets whose elements are consistent with the prior information. Since there is, usually, no known worst case, the family of sets may be unbounded.

A common example of an info-gap model is the fractional error model. The best estimate of an uncertain function u(x)\!\, is {\tilde{u}}(x), but the fractional error of this estimate is unknown. The following unbounded family of nested sets of functions is a fractional-error info-gap model:


\mathcal{U}(\alpha, {\tilde{u}}) = \left \{ u(x): \ 
|u(x) - {\tilde{u}}(x) | \le \alpha {\tilde{u}}(x), \ \mbox{for all}\ x \right \} , \ \ \ \alpha \ge 0

At any horizon of uncertainty \alpha, the set \mathcal{U}(\alpha, {\tilde{u}}) contains all functions u(x)\!\, whose fractional deviation from {\tilde{u}}(x) is no greater than \alpha. However, the horizon of uncertainty is unknown, so the info-gap model is an unbounded family of sets, and there is no worst case or greatest deviation.

There are many other types of info-gap models of uncertainty. All info-gap models obey two basic axioms:


\mathcal{U}(\alpha, {\tilde{u}}) \ \subseteq \ \mathcal{U}(\alpha^\prime, {\tilde{u}}) \quad \mbox{when}\ \alpha \le \alpha^\prime \qquad \mbox{(nesting)}

\mathcal{U}(0,{\tilde{u}}) = \{ {\tilde{u}} \} \qquad \mbox{(contraction)}

The nesting axiom imposes the property of "clustering" which is characteristic of info-gap uncertainty. Furthermore, the nesting axiom implies that the uncertainty sets \mathcal{U}(\alpha, \tilde{u}) become more inclusive as \alpha grows, thus endowing \alpha with its meaning as a horizon of uncertainty. The contraction axiom implies that, at horizon of uncertainty zero, the estimate {\tilde{u}} is correct.

Recall that the uncertain element u may be a parameter, vector, function or set. The info-gap model is then an unbounded family of nested sets of parameters, vectors, functions or sets.
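
A minimal sketch (Python; illustrative only) of an info-gap model represented as a family of membership tests, with numerical spot-checks of the contraction and nesting axioms for the fractional-error case:

# A fractional-error info-gap model over scalars, represented as a family of
# membership tests indexed by the horizon of uncertainty alpha.
# (Illustrative sketch only.)

def fractional_info_gap(u_tilde):
    def member(u, alpha):
        # u belongs to U(alpha, u_tilde) iff |u - u_tilde| <= alpha * u_tilde
        return abs(u - u_tilde) <= alpha * u_tilde
    return member

member = fractional_info_gap(u_tilde=100.0)

# Contraction: at alpha = 0 only the estimate itself belongs to the set.
assert member(100.0, 0.0) and not member(100.5, 0.0)

# Nesting (spot check): anything in U(0.3) is also in U(0.5).
for u in [60.0, 90.0, 100.0, 130.0, 170.0]:
    if member(u, 0.3):
        assert member(u, 0.5)
print("axiom spot-checks passed")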

Sublevel sets

For a fixed point estimate \tilde{u}, an info-gap model is often equivalent to a function \phi\colon \mathfrak{U} \to [0,+\infty) defined as:

\phi(u) := \min \{\alpha \mid u \in \mathcal{U}(\alpha,{\tilde{u}}) \}

meaning "the uncertainty of a point u is the minimum uncertainty such that u is in the set with that uncertainty". In this case, the family of sets \mathcal{U}(\alpha, \tilde{u}) can be recovered as the sublevel sets of \phi:

\mathcal{U}(\alpha, \tilde{u}) := \phi^{-1}([0,\alpha])

meaning: "the nested subset with horizon of uncertainty \alpha consists of all points with uncertainty less than or equal to \alpha".

Conversely, given a function \phi\colon \mathfrak{U} \to [0,+\infty), satisfying the axiom \phi^{-1}(0) = \{\tilde{u}\} (equivalently, \phi(u) = 0 if and only if u = \tilde{u}), it defines an info-gap model via the sublevel sets.

For instance, if the region of uncertainty is a metric space, then the uncertainty function can simply be the distance, \phi(u) := d(\tilde{u},u), so the nested subsets are simply

\mathcal{U}(\alpha, \tilde{u}) = \{ u \mid d(\tilde{u},u) \leq \alpha \}.

This always defines an info-gap model, as distances are always non-negative (axiom of non-negativity), and satisfies \phi^{-1}(0) = \{\tilde{u}\} (info-gap axiom of contraction) because the distance between two points is zero if and only if they are equal (the identity of indiscernibles); nesting follows by construction of the sublevel sets.
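
A minimal sketch (Python; the finite universe and the estimate are illustrative) of recovering the nested sets as sublevel sets of a distance-based uncertainty function:

# Sublevel sets of the distance-based uncertainty function phi(u) = |u - u_tilde|
# over a finite universe of points: U(alpha) collects every point whose
# uncertainty is at most alpha. (Illustrative sketch only.)

u_tilde = 100.0
universe = [60.0, 80.0, 95.0, 100.0, 110.0, 140.0, 170.0]

def phi(u):
    # uncertainty of a point: its distance from the estimate
    return abs(u - u_tilde)

def sublevel_set(alpha):
    # the nested subset at horizon of uncertainty alpha
    return [u for u in universe if phi(u) <= alpha]

print(sublevel_set(0.0))   # [100.0]                     (contraction)
print(sublevel_set(15.0))  # [95.0, 100.0, 110.0]
print(sublevel_set(45.0))  # [60.0, 80.0, 95.0, 100.0, 110.0, 140.0]   (nesting)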

Not all info-gap models arise as sublevel sets: for instance, if u_1 \in \mathcal{U}(\alpha, \tilde{u}) for all \alpha > 1, but not for \alpha = 1 (it has uncertainty "just more" than 1), then the minimum above is not defined; one can replace it by an infimum, but then the resulting sublevel sets will not agree with the info-gap model: u_1 \in \phi^{-1}([0,1]), but u_1 \not\in \mathcal{U}(1, \tilde{u}). The effect of this distinction is very minor, however, as it modifies sets by less than changing the horizon of uncertainty by any positive number \epsilon, however small.

Robustness and opportuneness

Uncertainty may be either pernicious or propitious. That is, uncertain variations may be either adverse or favorable. Adversity entails the possibility of failure, while favorability is the opportunity for sweeping success. Info-gap decision theory is based on quantifying these two aspects of uncertainty, and choosing an action which addresses one or the other or both of them simultaneously. The pernicious and propitious aspects of uncertainty are quantified by two "immunity functions": the robustness function expresses the immunity to failure, while the opportuneness function expresses the immunity to windfall gain.

Robustness and opportuneness functions

The robustness function expresses the greatest level of uncertainty at which failure cannot occur; the opportuneness function is the least level of uncertainty which entails the possibility of sweeping success. The robustness and opportuneness functions address, respectively, the pernicious and propitious facets of uncertainty.

Let q be a decision vector of parameters such as design variables, time of initiation, model parameters or operational options. We can verbally express the robustness and opportuneness functions as the maximum or minimum of a set of values of the uncertainty parameter \alpha of an info-gap model:


{\hat{\alpha}}(q) = \max \{ \alpha: \ \mbox{minimal requirements are always satisfied}\}
(robustness) (1a)

{\hat{\beta}}(q) = \min \{ \alpha: \ \mbox{sweeping success is possible}\}
(opportuneness) (2a)

Formally,


{\hat{\alpha}}(q) = \max \{ \alpha: \ \mbox{minimal requirements are satisfied for all } u \in \mathcal{U}(\alpha,\tilde u)\}
(robustness) (1b)

{\hat{\beta}}(q) = \min \{ \alpha: \ \mbox{windfall is achieved for at least one } u \in \mathcal{U}(\alpha,\tilde u) \}
(opportuneness) (2b)

We can "read" eq. (1) as follows. The robustness {\hat{\alpha}}(q) of decision vector q is the greatest value of the horizon of uncertainty \alpha for which specified minimal requirements are always satisfied. {\hat{\alpha}}(q) expresses robustness — the degree of resistance to uncertainty and immunity against failure — so a large value of {\hat{\alpha}}(q) is desirable. Robustness is defined as a worst-case scenario up to the horizon of uncertainty: how large can the horizon of uncertainty be and still, even in the worst case, achieve the critical level of outcome?

Eq. (2) states that the opportuneness {\hat{\beta}}(q) is the least level of uncertainty \alpha which must be tolerated in order to enable the possibility of sweeping success as a result of decisions q. {\hat{\beta}}(q) is the immunity against windfall reward, so a small value of {\hat{\beta}}(q) is desirable. A small value of {\hat{\beta}}(q) reflects the opportune situation that great reward is possible even in the presence of little ambient uncertainty. Opportuneness is defined as a best-case scenario up to the horizon of uncertainty: how small can the horizon of uncertainty be and still, in the best case, achieve the windfall reward?

The immunity functions {\hat{\alpha}}(q) and {\hat{\beta}}(q) are complementary and are defined in an anti-symmetric sense. Thus "bigger is better" for {\hat{\alpha}}(q) while "big is bad" for {\hat{\beta}}(q). The immunity functions — robustness and opportuneness — are the basic decision functions in info-gap decision theory.

Optimization

The robustness function involves a maximization, but not of the performance or outcome of the decision: in general the outcome could be arbitrarily bad. Rather, it maximizes the level of uncertainty that would be required for the outcome to fail.

The greatest tolerable uncertainty is found at which decision q satisfices the performance at a critical survival-level. One may establish one's preferences among the available actions q, \, q^\prime,\, \ldots according to their robustnesses {\hat{\alpha}}(q),\, {\hat{\alpha}}(q^\prime), \, \ldots , whereby larger robustness engenders higher preference. In this way the robustness function underlies a satisficing decision algorithm which maximizes the immunity to pernicious uncertainty.

The opportuneness function in eq. (2) involves a minimization, however not, as might be expected, of the damage which can accrue from unknown adverse events. The least horizon of uncertainty is sought at which decision q enables (but does not necessarily guarantee) large windfall gain. Unlike the robustness function, the opportuneness function does not satisfice, it "windfalls". Windfalling preferences are those which prefer actions for which the opportuneness function takes a small value. When {\hat{\beta}}(q) is used to choose an action q, one is "windfalling" by optimizing the opportuneness from propitious uncertainty in an attempt to enable highly ambitious goals or rewards.

Given a scalar reward function R(q,u), depending on the decision vector q and the info-gap-uncertain function u, the minimal requirement in eq. (1) is that the reward R(q,u) be no less than a critical value {r_{\rm c}}. Likewise, the sweeping success in eq. (2) is attainment of a "wildest dream" level of reward {r_{\rm w}} which is much greater than {r_{\rm c}}. Usually neither of these threshold values, {r_{\rm c}} and {r_{\rm w}}, is chosen irrevocably before performing the decision analysis. Rather, these parameters enable the decision maker to explore a range of options. In any case the windfall reward {r_{\rm w}} is greater, usually much greater, than the critical reward {r_{\rm c}}:


{r_{\rm w}} > {r_{\rm c}}

The robustness and opportuneness functions of eqs. (1) and (2) can now be expressed more explicitly:


{\hat{\alpha}}(q, {r_{\rm c}}) = \max \left \{ \alpha : \ r_{\rm c} \leq \min_{u \in \mathcal{U}(\alpha, \tilde{u})} R(q,u) \right \}
(3)

{\hat{\beta}}(q, {r_{\rm w}}) = \min \left \{ \alpha : \ r_{\rm w} \leq \max_{u \in \mathcal{U}(\alpha, \tilde{u})} R(q,u) \right \}
(4)

{\hat{\alpha}}(q, {r_{\rm c}}) is the greatest level of uncertainty consistent with guaranteed reward no less than the critical reward {r_{\rm c}}, while {\hat{\beta}}(q, {r_{\rm w}}) is the least level of uncertainty which must be accepted in order to facilitate (but not guarantee) windfall as great as {r_{\rm w}}. The complementary or anti-symmetric structure of the immunity functions is evident from eqs. (3) and (4).

These definitions can be modified to handle multi-criterion reward functions. Likewise, analogous definitions apply when R(q,u) is a loss rather than a reward.
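
A numerical sketch of eqs. (3) and (4) (Python; the reward function R(q,u) = q*u and the fractional-error uncertainty model are assumptions chosen only for illustration), evaluating both immunity functions by grid search over the horizon of uncertainty:

# Grid-search evaluation of eqs. (3) and (4) for a scalar uncertain parameter u
# with a fractional-error info-gap model around u_tilde and an assumed reward
# R(q, u) = q * u (e.g. revenue = allocation * uncertain rate).

u_tilde = 100.0

def uncertainty_set(alpha, n=201):
    # finite sample of U(alpha, u_tilde) = { u : |u - u_tilde| <= alpha * u_tilde }
    lo, hi = u_tilde * (1.0 - alpha), u_tilde * (1.0 + alpha)
    return [lo + (hi - lo) * i / (n - 1) for i in range(n)]

def reward(q, u):
    return q * u

def robustness(q, r_critical, alphas):
    # eq. (3): largest alpha whose worst-case reward still meets r_critical
    ok = [a for a in alphas
          if min(reward(q, u) for u in uncertainty_set(a)) >= r_critical]
    return max(ok, default=0.0)

def opportuneness(q, r_windfall, alphas):
    # eq. (4): smallest alpha whose best-case reward reaches r_windfall
    ok = [a for a in alphas
          if max(reward(q, u) for u in uncertainty_set(a)) >= r_windfall]
    return min(ok, default=float("inf"))

alphas = [i / 100 for i in range(101)]
print(robustness(q=0.8, r_critical=48.0, alphas=alphas))     # 0.4: 0.8*100*(1-0.4) = 48
print(opportuneness(q=0.8, r_windfall=120.0, alphas=alphas)) # 0.5: 0.8*100*(1+0.5) = 120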

Decision rules

Based on these functions, one can then decide on a course of action by optimizing for uncertainty: choose the decision which is most robust (can withstand the greatest uncertainty; "satisficing"), or choose the decision which requires the least uncertainty to achieve a windfall.

Formally, optimizing for robustness or optimizing for opportuneness yields a preference relation on the set of decisions, and the decision rule is to optimize with respect to this preference.

In what follows, let \mathcal{Q} be the set of all available or feasible decision vectors q.

Robust-satisficing

The robustness function generates robust-satisficing preferences on the options: decisions are ranked in increasing order of robustness, for a given critical reward, i.e., by {\hat{\alpha}}(q, {r_{\rm c}}) value, meaning q \succ _{\rm r} q^\prime if {\hat{\alpha}}(q, {r_{\rm c}}) > {\hat{\alpha}}(q^\prime, {r_{\rm c}}).

A robust-satisficing decision is one which maximizes the robustness and satisfices the performance at the critical level {r_{\rm c}}.

Denote the maximum robustness by \hat{\alpha}, (formally \hat{\alpha}({r_{\rm c}}), for the maximum robustness for a given critical reward), and the corresponding decision (or decisions) by \hat{q}_{{\rm c}} (formally, {\hat{q}_{{\rm c}}}({r_{\rm c}}), the critical optimizing action for a given level of critical reward):

\begin{align}
\hat{\alpha}({r_{\rm c}}) &= \max_{q \in \mathcal{Q}} {\hat{\alpha}}(q, {r_{\rm c}})\\
{\hat{q}_{{\rm c}}}({r_{\rm c}}) &= \arg \max_{q \in \mathcal{Q}} {\hat{\alpha}}(q, {r_{\rm c}})
\end{align}

Usually, though not invariably, the robust-satisficing action {\hat{q}_{{\rm c}}}({r_{\rm c}}) depends on the critical reward {r_{\rm c}}.
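
A minimal sketch (Python; the two candidate decisions and their worst-case reward models are illustrative assumptions) of the robust-satisficing rule, showing how the preferred decision can switch as the critical reward changes:

# Robust-satisficing over two candidate decisions whose robustness curves
# cross. Worst-case reward models are assumed purely for illustration:
#   decision "a": high nominal reward (10) that erodes quickly with uncertainty
#   decision "b": lower nominal reward (8) that erodes slowly with uncertainty

def worst_reward(decision, alpha):
    nominal, erosion = {"a": (10.0, 8.0), "b": (8.0, 3.0)}[decision]
    return nominal - erosion * alpha

def robustness(decision, r_critical, alphas):
    feasible = [a for a in alphas if worst_reward(decision, a) >= r_critical]
    return max(feasible, default=0.0)

def robust_satisfice(r_critical, alphas):
    # the robust-satisficing decision maximizes robustness at this critical reward
    return max(["a", "b"], key=lambda d: robustness(d, r_critical, alphas))

alphas = [i / 1000 for i in range(2001)]   # horizons 0.000 ... 2.000
print(robust_satisfice(5.0, alphas))  # 'b': robustness (8-5)/3 = 1.0 beats (10-5)/8 = 0.625
print(robust_satisfice(9.0, alphas))  # 'a': robustness (10-9)/8 = 0.125; 'b' cannot reach 9 at all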

Opportune-windfalling

Conversely, one may optimize opportuneness: the opportuneness function generates opportune-windfalling preferences on the options: decisions are ranked by their opportuneness for a given windfall reward, i.e., by {\hat{\beta}}(q, {r_{\rm w}}) value, with smaller values preferred, meaning q \succ _{\rm w} q^\prime if {\hat{\beta}}(q, {r_{\rm w}}) < {\hat{\beta}}(q^\prime, {r_{\rm w}}).

The opportune-windfalling decision, {\hat{q}_{{\rm w}}}({r_{\rm w}}), minimizes the opportuneness function on the set of available decisions.

Denote the minimum opportuneness by \hat{\beta}, (formally \hat{\beta}({r_{\rm w}}), for the minimum opportuneness for a given windfall reward), and the corresponding decision (or decisions) by \hat{q}_{{\rm w}} (formally, {\hat{q}_{{\rm w}}}({r_{\rm w}}), the windfall optimizing action for a given level of windfall reward):

\begin{align}
\hat{\beta}        ({r_{\rm w}})
  &=      \min_{q \in \mathcal{Q}} {\hat{\beta}}(q, {r_{\rm w}})\\
{\hat{q}_{{\rm w}}}({r_{\rm w}})
  &= \arg \min_{q \in \mathcal{Q}} {\hat{\beta}}(q, {r_{\rm w}})
\end{align}

The two preference rankings, as well as the corresponding optimal decisions {\hat{q}_{{\rm c}}}({r_{\rm c}}) and {\hat{q}_{{\rm w}}}({r_{\rm w}}), may be different, and may vary depending on the values of {r_{\rm c}} and {r_{\rm w}}.

Applications

Info-gap theory has generated a substantial literature and has been studied or applied in a range of areas, including engineering,[5][6][7][8][9][10][11][12][13][14][15][16][17][18] biological conservation,[19][20][21][22][23][24][25][26][27][28][29][30] theoretical biology,[31] homeland security,[32] economics,[33][34][35] project management[36][37][38] and statistics.[39] Foundational issues related to info-gap theory have also been studied.[40][41][42][43][44][45]

The remainder of this section describes in a little more detail the kind of uncertainties addressed by info-gap theory. Although many published works are mentioned below, no attempt is made here to present insights from these papers. The emphasis is not upon elucidation of the concepts of info-gap theory, but upon the context where it is used and the goals.

Engineering

A typical engineering application is the vibration analysis of a cracked beam, where the location, size, shape and orientation of the crack are unknown and greatly influence the vibration dynamics.[9] Very little is usually known about these spatial and geometrical uncertainties. The info-gap analysis allows one to model these uncertainties, and to determine the degree of robustness – to these uncertainties – of properties such as vibration amplitude, natural frequencies, and natural modes of vibration. Another example is the structural design of a building subject to uncertain loads such as from wind or earthquakes.[8][10] The response of the structure depends strongly on the spatial and temporal distribution of the loads. However, storms and earthquakes are highly idiosyncratic events, and the interaction between the event and the structure involves very site-specific mechanical properties which are rarely known. The info-gap analysis enables the design of the structure to enhance structural immunity against uncertain deviations from design-base or estimated worst-case loads. Another engineering application involves the design of a neural net for detecting faults in a mechanical system, based on real-time measurements. A major difficulty is that faults are highly idiosyncratic, so that training data for the neural net will tend to differ substantially from data obtained from real-time faults after the net has been trained. The info-gap robustness strategy enables one to design the neural net to be robust to the disparity between training data and future real events.[11][13]

Biology

Biological systems are vastly more complex and subtle than our best models, so the conservation biologist faces substantial info-gaps in using biological models. For instance, Levy et al.[19] use an info-gap robust-satisficing "methodology for identifying management alternatives that are robust to environmental uncertainty, but nonetheless meet specified socio-economic and environmental goals." They use info-gap robustness curves to select among management options for spruce-budworm populations in Eastern Canada. Burgman[46] uses the fact that the robustness curves of different alternatives can intersect, to illustrate a change in preference between conservation strategies for the orange-bellied parrot.

Project management

Project management is another area where info-gap uncertainty is common. The project manager often has very limited information about the duration and cost of some of the tasks in the project, and info-gap robustness can assist in project planning and integration.[37] Financial economics is another area where the future is fraught with surprises, which may be either pernicious or propitious. Info-gap robustness and opportuneness analyses can assist in portfolio design, credit rationing, and other applications.[33]

Limitations

In applying info-gap theory, one must remain aware of certain limitations.

Firstly, info-gap makes assumptions, namely on the universe in question and the degree of uncertainty – the info-gap model is a model of degrees of uncertainty or similarity of various assumptions, within a given universe. Info-gap does not make probability assumptions within this universe – it is non-probabilistic – but does quantify a notion of "distance from the estimate". In brief, info-gap makes fewer assumptions than a probabilistic method, but does make some assumptions.

Further, unforeseen events (those not in the universe \mathfrak{U}) are not incorporated: info-gap addresses modeled uncertainty, not unexpected uncertainty, as in black swan theory, particularly the ludic fallacy. This is not a problem when the possible events by definition fall in a given universe, but in real world applications, significant events may be "outside model". For instance, a simple model of daily stock market returns – which by definition fall in the range [-100\%,+\infty\%) – may include extreme moves such as Black Monday (1987) but might not model the market breakdowns following the September 11 attacks: it considers the "known unknowns", not the "unknown unknowns". This is a general criticism of much decision theory, and is by no means specific to info-gap, but nor is info-gap immune to it.

Secondly, there is no natural scale: is uncertainty of \alpha = 1 small or large? Different models of uncertainty give different scales, and require judgment and understanding of the domain and the model of uncertainty. Similarly, measuring differences between outcomes requires judgment and understanding of the domain.

Thirdly, if the universe under consideration is larger than a significant horizon of uncertainty, and outcomes for these distant points are significantly different from points near the estimate, then conclusions of robustness or opportuneness analyses will generally be: "one must be very confident of one's assumptions, else outcomes may be expected to vary significantly from projections" – a cautionary conclusion.

Disclaimer and Summary

The robustness and opportuneness functions can inform decisions. For example, a change in decision increasing robustness may increase or decrease opportuneness. From a subjective stance, robustness and opportuneness both trade off against aspiration for outcome: robustness and opportuneness deteriorate as the decision maker's aspirations increase. Robustness is zero for model-best anticipated outcomes. Robustness curves for alternative decisions may cross as a function of aspiration, implying reversal of preference.

Various theorems identify conditions where larger info-gap robustness implies larger probability of success, regardless of the underlying probability distribution. However, these conditions are technical, and do not translate into any common-sense, verbal recommendations, limiting such applications of info-gap theory by non-experts.

Criticism

A general criticism of non-probabilistic decision rules, discussed in detail at decision theory: alternatives to probability theory, is that optimal decision rules (formally, admissible decision rules) can always be derived by probabilistic methods, with a suitable utility function and prior distribution (this is the statement of the complete class theorems), and thus that non-probabilistic methods such as info-gap are unnecessary and do not yield new or better decision rules.

A more general criticism of decision making under uncertainty is the impact of outsized, unexpected events, ones that are not captured by the model. This is discussed particularly in black swan theory, and info-gap, used in isolation, is vulnerable to this, as are, a fortiori, all decision theories that use a fixed universe of possibilities, notably probabilistic ones.

In criticism specific to info-gap, Sniedovich[47] raises two objections to info-gap decision theory, one substantive, one scholarly:

1. the info-gap uncertainty model is flawed and oversold
Info-gap models uncertainty via a nested family of subsets around a point estimate, and is touted as applicable under situations of "severe uncertainty". Sniedovich argues that under severe uncertainty, one should not start from a point estimate, which is assumed to be seriously flawed: instead the set one should consider is the universe of possibilities, not subsets thereof. Stated alternatively, under severe uncertainty, one should use global decision theory (consider the entire region of uncertainty), not local decision theory (starting with a point estimate and considering deviations from it).
2. info-gap is maximin
Ben-Haim (2006, p.xii) claims that info-gap is "radically different from all current theories of decision under uncertainty," while Sniedovich argues that info-gap's robustness analysis is precisely maximin analysis of the horizon of uncertainty. By contrast, Ben-Haim states (Ben-Haim 1999, pp. 271–2) that "robust reliability is emphatically not a [min-max] worst-case analysis". Note that Ben-Haim compares info-gap to minimax, while Sniedovich considers it a case of maximin.

Sniedovich has challenged the validity of info-gap theory for making decisions under severe uncertainty. He questions the effectiveness of info-gap theory in situations where the best estimate \displaystyle \tilde{u} is a poor indication of the true value of \displaystyle u. Sniedovich notes that the info-gap robustness function is "local" to the region around \displaystyle \tilde{u}, where \displaystyle \tilde{u} is likely to be substantially in error. He concludes that therefore the info-gap robustness function is an unreliable assessment of immunity to error.

Maximin

Sniedovich argues that info-gap's robustness model is a maximin analysis, not of the outcome, but of the horizon of uncertainty: for each decision, one maximizes the horizon of uncertainty \alpha such that the minimal (critical) outcome is achieved, assuming the worst-case outcome within that horizon. Symbolically, max \alpha assuming min (worst-case) outcome, or maximin.

In other words, while it is not a maximin analysis of outcome over the universe of uncertainty, it is a maximin analysis over a properly construed decision space.

Ben-Haim argues that info-gap's robustness model is not min-max/maximin analysis, because it is not worst-case analysis of outcomes; it is a satisficing model, not an optimization model – a straightforward maximin analysis would consider worst-case outcomes over the entire space which, since uncertainty is often potentially unbounded, would yield an unboundedly bad worst case.

Stability radius

Sniedovich[3] has shown that info-gap's robustness model is a simple stability radius model, namely a local stability model of the generic form

\hat{\rho}(\tilde{p}):= \max \ \{\rho\ge 0: p\in P(s),\forall p\in B(\rho,\tilde{p})\}

where B(\rho,\tilde{p}) denotes a ball of radius \rho centered at \tilde{p} and P(s) denotes the set of values of p that satisfy pre-determined stability conditions.

In other words, info-gap's robustness model is a stability radius model characterized by a stability requirement of the form r_{c}\le R(q,p). Since stability radius models are designed for the analysis of small perturbations in a given nominal value of a parameter, Sniedovich[3] argues that info-gap's robustness model is unsuitable for the treatment of severe uncertainty characterized by a poor estimate and a vast uncertainty space.
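
A minimal sketch (Python; the performance function, nominal value and threshold are assumptions made only for illustration) of a stability-radius computation of the generic form above, with the stability condition r_c <= R(q,p):

# Stability radius of a nominal parameter value p_tilde: the largest radius rho
# such that the condition R(q, p) >= r_critical holds for every p in the ball
# B(rho, p_tilde). Performance function and numbers are illustrative only.

p_tilde = 1.6          # nominal parameter value
q = 1.5                # fixed decision
r_critical = 1.0       # stability requirement: R(q, p) >= r_critical
TOL = 1e-9             # small tolerance for floating-point roundoff

def R(q, p):
    # assumed concave performance function; R >= 1 exactly for p in [1, 2]
    return q * p - p * p / 2.0

def stable_on_ball(rho, n=401):
    # check the stability condition on a dense sample of the ball B(rho, p_tilde)
    ps = [p_tilde - rho + 2.0 * rho * i / (n - 1) for i in range(n)]
    return all(R(q, p) >= r_critical - TOL for p in ps)

rhos = [i / 1000 for i in range(1001)]   # candidate radii 0.000 ... 1.000
print(max((r for r in rhos if stable_on_ball(r)), default=0.0))
# 0.4: the largest ball around 1.6 that stays inside the stable interval [1, 2]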

Discussion

Satisficing and bounded rationality

It is correct that the info-gap robustness function is local, and has restricted quantitative value in some cases. However, a major purpose of decision analysis is to provide focus for subjective judgments. That is, regardless of the formal analysis, a framework for discussion is provided. Without entering into any particular framework, or characteristics of frameworks in general, discussion follows about proposals for such frameworks.

Simon[48] introduced the idea of bounded rationality. Limitations on knowledge, understanding, and computational capability constrain the ability of decision makers to identify optimal choices. Simon advocated satisficing rather than optimizing: seeking adequate (rather than optimal) outcomes given available resources. Schwartz,[49] Conlisk[50] and others discuss extensive evidence for the phenomenon of bounded rationality among human decision makers, as well as for the advantages of satisficing when knowledge and understanding are deficient. The info-gap robustness function provides a means of implementing a satisficing strategy under bounded rationality. For instance, in discussing bounded rationality and satisficing in conservation and environmental management, Burgman notes that "Info-gap theory ... can function sensibly when there are 'severe' knowledge gaps." The info-gap robustness and opportuneness functions provide "a formal framework to explore the kinds of speculations that occur intuitively when examining decision options."[51] Burgman then proceeds to develop an info-gap robust-satisficing strategy for protecting the endangered orange-bellied parrot. Similarly, Vinot, Cogan and Cipolla[52] discuss engineering design and note that "the downside of a model-based analysis lies in the knowledge that the model behavior is only an approximation to the real system behavior. Hence the question of the honest designer: how sensitive is my measure of design success to uncertainties in my system representation? ... It is evident that if model-based analysis is to be used with any level of confidence then ... [one must] attempt to satisfy an acceptable sub-optimal level of performance while remaining maximally robust to the system uncertainties."[52] They proceed to develop an info-gap robust-satisficing design procedure for an aerospace application.

Alternatives

Of course, decision in the face of uncertainty is nothing new, and attempts to deal with it have a long history. A number of authors have noted and discussed similarities and differences between info-gap robustness and minimax or worst-case methods.[7][16][35][37][53][54] Sniedovich[47] has demonstrated formally that the info-gap robustness function can be represented as a maximin optimization, and is thus related to Wald's minimax theory. He has also claimed that info-gap's robustness analysis is conducted in the neighborhood of an estimate that is likely to be substantially wrong, concluding that the resulting robustness function is equally likely to be substantially wrong.

On the other hand, the estimate is the best one has, so it is useful to know if it can err greatly and still yield an acceptable outcome. This critical question clearly raises the issue of whether robustness (as defined by info-gap theory) is qualified to judge whether confidence is warranted,[5][55][56] and how it compares to methods used to inform decisions under uncertainty using considerations not limited to the neighborhood of a bad initial guess. Answers to these questions vary with the particular problem at hand. Some general comments follow.

Sensitivity analysis

Sensitivity analysis – how sensitive conclusions are to input assumptions – can be performed independently of a model of uncertainty: most simply, one may take two different assumed values for an input and compare the conclusions. From this perspective, info-gap can be seen as a technique of sensitivity analysis, though by no means the only one.

Robust optimization

The robust optimization literature[57][58][59][60][61][62] provides methods and techniques that take a global approach to robustness analysis. These methods directly address decision under severe uncertainty, and have been used for this purpose for more than thirty years now. Wald's Maximin model is the main instrument used by these methods.

The principal difference between the Maximin model employed by info-gap and the various Maximin models employed by robust optimization methods is in the manner in which the total region of uncertainty is incorporated in the robustness model. Info-gap takes a local approach that concentrates on the immediate neighborhood of the estimate. In sharp contrast, robust optimization methods set out to incorporate in the analysis the entire region of uncertainty, or at least an adequate representation thereof. In fact, some of these methods do not even use an estimate.

Comparative analysis

Classical decision theory[63][64] offers two approaches to decision-making under severe uncertainty, namely maximin and Laplace's principle of insufficient reason (assume all outcomes equally likely); these may be considered alternative solutions to the problem info-gap addresses.

Further, as discussed at decision theory: alternatives to probability theory, probabilists, particularly Bayesian probabilists, argue that optimal decision rules (formally, admissible decision rules) can always be derived by probabilistic methods (this is the statement of the complete class theorems), and thus that non-probabilistic methods such as info-gap are unnecessary and do not yield new or better decision rules.

Maximin

As attested by the rich literature on robust optimization, maximin provides a wide range of methods for decision making in the face of severe uncertainty.

Indeed, as discussed in criticism of info-gap decision theory, info-gap's robustness model can be interpreted as an instance of the general maximin model.

Bayesian analysis

As for Laplace's principle of insufficient reason, in this context it is convenient to view it as an instance of Bayesian analysis.

The essence of Bayesian analysis is assigning probabilities to the different possible realizations of the uncertain parameters. In the case of Knightian (non-probabilistic) uncertainty, these probabilities represent the decision maker's "degree of belief" in a specific realization.

In our example, suppose there are only five possible realizations of the uncertain revenue function. The decision maker believes that the estimated function is the most likely, and that the likelihood decreases as the difference from the estimate increases. Figure 11 exemplifies such a probability distribution.

Now, for any allocation, one can construct a probability distribution of the revenue, based on these prior beliefs. The decision maker can then choose the allocation with the highest expected revenue, with the lowest probability for an unacceptable revenue, etc.
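
A minimal sketch (Python; the five realizations and the prior weights are assumptions made only for illustration) of this Bayesian treatment for a single, fixed allocation:

# Bayesian treatment with five discrete realizations of the total revenue for
# one fixed allocation. The revenues and the decision maker's relative
# "degrees of belief" are assumed purely for illustration.

revenues = [60.0, 80.0, 100.0, 120.0, 140.0]   # possible total revenues
weights  = [1, 2, 4, 2, 1]                     # relative prior degrees of belief
critical_revenue = 70.0

total_weight = sum(weights)
expected_revenue = sum(w * r for w, r in zip(weights, revenues)) / total_weight
prob_failure = sum(w for w, r in zip(weights, revenues)
                   if r < critical_revenue) / total_weight

print(expected_revenue)  # 100.0
print(prob_failure)      # 0.1, the prior probability of an unacceptable revenue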

The most problematic step of this analysis is the choice of the realization probabilities. When there is extensive and relevant past experience, an expert may use this experience to construct a probability distribution. But even with extensive past experience, when some parameters change, the expert may only be able to estimate that A is more likely than B, but will not be able to reliably quantify this difference. Furthermore, when conditions change drastically, or when there is no past experience at all, it may prove difficult even to estimate whether A is more likely than B.

Nevertheless, methodologically speaking, this difficulty is not as problematic as basing the analysis of a problem subject to severe uncertainty on a single point estimate and its immediate neighborhood, as done by info-gap. And what is more, contrary to info-gap, this approach is global, rather than local.

Still, it must be stressed that Bayesian analysis does not expressly concern itself with the question of robustness.

It should also be noted that Bayesian analysis raises the issue of learning from experience and adjusting probabilities accordingly. In other words, decision is not a one-stop process, but profits from a sequence of decisions and observations.

Classical decision theory perspective

In the framework of classical decision theory, info-gap's robustness model can be construed as an instance of Wald's Maximin model and its opportuneness model is an instance of the classical Minimin model. Both operate in the neighborhood of an estimate of the parameter of interest whose true value is subject to severe uncertainty and therefore is likely to be substantially wrong. Moreover, the considerations brought to bear upon the decision process itself also originate in the locality of this unreliable estimate, and so may or may not be reflective of the entire range of decisions and uncertainties.

Background, working assumptions, and a look ahead

Decision under severe uncertainty is a formidable task and the development of methodologies capable of handling this task is an even more arduous undertaking. Indeed, over the past sixty years an enormous effort has gone into the development of such methodologies. Yet, for all the knowledge and expertise that have accrued in this area of decision theory, no fully satisfactory general methodology is available to date.

Now, as portrayed in the info-gap literature, info-gap was designed expressly as a methodology for solving decision problems that are subject to severe uncertainty. And what is more, its aim is to seek solutions that are robust.

Thus, to have a clear picture of info-gap's modus operandi and its role and place in decision theory and robust optimization, it is imperative to examine it within this context. In other words, it is necessary to establish info-gap's relation to classical decision theory and robust optimization; to this end, a number of questions must be addressed.

Two important points need to be elucidated in this regard at the outset:

So, first let us clarify the assumptions that are implied by severe uncertainty.

Working assumptions

Info-gap decision theory employs three simple constructs to capture the uncertainty associated with decision problems:

  1. A parameter \displaystyle u whose true value is subject to severe uncertainty.
  2. A region of uncertainty \displaystyle \mathfrak{U}\ where the true value of \displaystyle u \ lies.
  3. An estimate \ \displaystyle \tilde{u}\ of the true value of \displaystyle u \ .

It should be pointed out, though, that as such these constructs are generic, meaning that they can be employed to model situations where the uncertainty is not severe but mild, indeed very mild. It is therefore vital to be clear that, to give apt expression to the severity of the uncertainty, these three constructs are given a specific meaning in the info-gap framework.

Working Assumptions
  1. The region of uncertainty \displaystyle \mathfrak{U}\ is relatively large.
    In fact, Ben-Haim (2006, p. 210) indicates that in the context of info-gap decision theory most of the commonly encountered regions of uncertainty are unbounded.
  2. The estimate \displaystyle \tilde{u}\ is a poor approximation of the true value of \displaystyle \ u\ .
    That is, the estimate is a poor indication of the true value of \displaystyle \ u\ (Ben-Haim, 2006, p. 280) and is likely to be substantially wrong (Ben-Haim, 2006, p. 281).

In what follows, \displaystyle  u^{\circ}\ denotes the true (unknown) value of \ \displaystyle u\ .

The point to note here is that conditions of severe uncertainty entail that the estimate \displaystyle  \tilde{u}\ can—relatively speaking—be very distant from the true value \displaystyle  u^{\circ}\ . This is particularly pertinent for methodologies, like info-gap, that seek robustness to uncertainty. Indeed, assuming otherwise would—methodologically speaking—be tantamount to engaging in wishful thinking.

In short, the situations that info-gap is designed to take on are demanding in the extreme. Hence, the challenge that one faces conceptually, methodologically and technically is considerable. It is essential therefore to examine whether info-gap robustness analysis succeeds in this task, and whether the tools that it deploys in this effort are different from those made available by Wald's (1945) Maximin paradigm especially for robust optimization.

So let us take a quick look at this stalwart of classical decision theory and robust optimization.

Wald's Maximin paradigm

The basic idea behind this famous paradigm can be expressed in plain language as follows:

Maximin Rule
The maximin rule tells us to rank alternatives by their worst possible outcomes: we are to adopt the alternative the worst outcome of which is superior to the worst outcome of the others.
Rawls (1971, p. 152)[65]

Thus, according to this paradigm, in the framework of decision-making under severe uncertainty, the robustness of an alternative is a measure of how well this alternative can cope with the worst uncertain outcome that it can generate. Needless to say, this attitude towards severe uncertainty often leads to the selection of highly conservative alternatives. This is precisely the reason that this paradigm is not always a satisfactory methodology for decision-making under severe uncertainty (Tintner 1952).
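To see the rule at work, consider the following small Python sketch; the payoff table is hypothetical and serves illustration only:

# A minimal illustration of the maximin rule on a hypothetical payoff table.
# Rows are alternatives, columns are states of nature; larger outcomes are better.
payoffs = {
    "a1": [3, 7, 1],   # worst outcome: 1
    "a2": [4, 4, 5],   # worst outcome: 4
    "a3": [9, 0, 8],   # worst outcome: 0
}

# Rank the alternatives by their worst possible outcomes and adopt the best of the worst.
maximin_choice = max(payoffs, key=lambda a: min(payoffs[a]))
print(maximin_choice)   # a2: its worst outcome (4) is superior to the other worst outcomes

Here a2 is the maximin choice even though a3 offers the single best outcome in the table, which illustrates the conservatism noted above.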

As indicated in the overview, info-gap's robustness model is a Maximin model in disguise. More specifically, it is a simple instance of Wald's Maximin model where:

  1. The region of uncertainty associated with an alternative decision is an immediate neighborhood of the estimate \displaystyle \tilde{u}\ .
  2. The uncertain outcomes of an alternative are determined by a characteristic function of the performance requirement under consideration.

Thus, aside from the conservatism issue, a far more serious issue must be addressed. This is the validity issue arising from the local nature of info-gap's robustness analysis.

Local vs global robustness

The validity of the results generated by info-gap's robustness analysis is crucially contingent on the quality of the estimate \displaystyle \tilde{u}\ . Alas, according to info-gap's own working assumptions, this estimate is poor and likely to be substantially wrong (Ben-Haim, 2006, pp. 280–281).

The trouble with this feature of info-gap's robustness model is that the Maximin analysis is confined to an immediate neighborhood of the estimate \ \displaystyle \tilde{u}\ . Since the region of uncertainty is large and the quality of the estimate is poor, it is very likely that the true value of \ \displaystyle u\ is distant from the point at which the Maximin analysis is conducted.

So given the severity of the uncertainty under consideration, how valid/useful can this type of Maximin analysis really be?

The critical issue, then, is to what extent a local robustness analysis a la Maximin in the immediate neighborhood of a poor estimate can aptly represent a large region of uncertainty. This is a serious issue that must be dealt with in this article.

It should be pointed out that, in comparison, robust optimization methods invariably take a far more global view of robustness. So much so that scenario planning and scenario generation are central issues in this area. This reflects a strong commitment to an adequate representation of the entire region of uncertainty in the definition of robustness and in the robustness analysis itself.

And finally there is another reason why the intimate relation to Maximin is crucial to this discussion. This has to do with the portrayal of info-gap's contribution to the state of the art in decision theory, and its role and place vis-a-vis other methodologies.

Role and place in decision theory

Info-gap is emphatic about its advancement of the state of the art in decision theory:

Info-gap decision theory is radically different from all current theories of decision under uncertainty. The difference originates in the modelling of uncertainty as an information gap rather than as a probability.

Ben-Haim (2006, p.xii)
In this book we concentrate on the fairly new concept of information-gap uncertainty, whose differences from more classical approaches to uncertainty are real and deep. Despite the power of classical decision theories, in many areas such as engineering, economics, management, medicine and public policy, a need has arisen for a different format for decisions based on severely uncertain evidence.
Ben-Haim (2006, p. 11)

These strong claims must be substantiated. In particular, a clear-cut, unequivocal answer must be given to the following question: in what way is info-gap's generic robustness model different, indeed radically different, from worst-case analysis a la Maximin?

Subsequent sections of this article describe various aspects of info-gap decision theory and its applications, how it proposes to cope with the working assumptions outlined above, the local nature of info-gap's robustness analysis and its intimate relationship with Wald's classical Maximin paradigm and worst-case analysis.

Invariance property

The main point to keep in mind here is that info-gap's raison d'être is to provide a methodology for decision under severe uncertainty. This means that its primary test is the efficacy with which it handles and copes with severe uncertainty. To this end it must first be established how info-gap's robustness/opportuneness models behave and fare as the severity of the uncertainty is increased or decreased.

Second, it must be established whether info-gap's robustness/opportuneness models give adequate expression to the potential variability of the performance function over the entire region of uncertainty. This is particularly important because info-gap is usually concerned with relatively large, indeed unbounded, regions of uncertainty.

So, let \ \displaystyle \mathfrak{U} \ denote the total region of uncertainty and consider these key questions:

  • How does the robustness/opportuneness analysis respond to an increase/decrease in the size of \ \displaystyle \mathfrak{U} \ ?
  • How does an increase/decrease in the size of \ \displaystyle \mathfrak{U} \ affect the robustness or opportuneness of a decision?
  • How representative are the results generated by info-gap's robustness/opportuneness analysis of what occurs in the relatively large total region of uncertainty \ \displaystyle \mathfrak{U} \ ?

Suppose then that the robustness \ \displaystyle \hat{\alpha}(q,r_{c}) \ has been computed for a decision \ \displaystyle q\in \mathcal{Q}\ and it is observed that \ \displaystyle \ \mathcal{U}(\alpha^{*},\tilde{u}) \subseteq \mathfrak{U}\ where \ \displaystyle \alpha^{*}=\hat{\alpha}(q,r_{c}) + \varepsilon \   for some \ \displaystyle \varepsilon > 0\ .

The question is then: how would the robustness of \ \displaystyle q \ , namely \ \displaystyle \hat{\alpha}(q,r_{c}) \ , be affected if the region of uncertainty were, say, twice as large as \ \displaystyle \mathfrak{U} \ , or perhaps even 10 times as large as \ \displaystyle \mathfrak{U} \ ?

Consider then the following result, which is a direct consequence of the local nature of info-gap's robustness/opportuneness analysis and the nesting property of info-gap's regions of uncertainty (Sniedovich 2007):

Invariance Theorem

The robustness of decision \ \displaystyle q \ is invariant with the size of the total region of uncertainty \ \displaystyle \mathfrak{U} \ for all \ \displaystyle \mathfrak{U} \ such that

(7) \mathcal{U}(\hat{\alpha}(q,r_{c})+\varepsilon,\tilde{u}) \subseteq \mathfrak{U}\   for some \ \displaystyle \varepsilon > 0\ .               \Box

In other words, for any given decision, info-gap's analysis yields the same results for all total regions of uncertainty that contain \ \displaystyle \ \mathcal{U}(\alpha^{*},\tilde{u}) \ . This applies to both the robustness and opportuneness models.

Thus, the robustness of a given decision does not change notwithstanding an arbitrarily large increase in the total region of uncertainty \ \displaystyle \mathfrak{U} \ .

In short, by dint of focusing exclusively on the immediate neighborhood of the estimate \ \displaystyle \tilde{u} \ , info-gap's robustness/opportuneness models are inherently local. For this reason they are, in principle, incapable of incorporating in the analysis of \ \displaystyle \hat{\alpha}(q,r_{c}) \ and \ \displaystyle \hat{\beta}(q,r_{w}) \ regions of uncertainty that lie outside the neighborhoods \mathcal{U}(\hat{\alpha}(q,r_{c}),\tilde{u})\ and \mathcal{U}(\hat{\beta}(q,r_{w}),\tilde{u})\ of the estimate \ \displaystyle \tilde{u} \ , respectively.

To illustrate, consider a simple numerical example where the total region of uncertainty is \mathfrak{U}=(-\infty,\infty),\ the estimate is \ \displaystyle \tilde{u}=0 \ and for some decision \ \displaystyle \hat{q} \ we obtain \mathcal{U}(\hat{\alpha}(\hat{q},r_{c}),\tilde{u})=(-2,2). The part of the total region of uncertainty that lies outside \ \displaystyle \mathcal{U}(\hat{\alpha}(\hat{q},r_{c})+\varepsilon,\tilde{u}) \ is, so to speak, a "no man's land" that plays no role whatsoever in the analysis.

Note that in this case the robustness of decision \ \displaystyle \hat{q} \ is based on its (worst-case) performance over no more than a minuscule part of the total region of uncertainty, namely an immediate neighborhood of the estimate \ \displaystyle \tilde{u} \ . Since info-gap's total region of uncertainty is usually unbounded, this illustration represents the usual case rather than an exception.
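This invariance can be illustrated numerically with a small Python sketch; the reward function R(q,u) = q - u, the interval uncertainty model and all numbers below are hypothetical:

# A minimal numerical sketch (hypothetical model): reward R(q, u) = q - u,
# estimate u_tilde = 0, regions of uncertainty U(alpha, u_tilde) given by the
# interval [u_tilde - alpha, u_tilde + alpha] clipped to the total region
# (-total_bound, total_bound).
def robustness(q, r_c, u_tilde=0.0, total_bound=10.0):
    """Largest horizon of uncertainty alpha (on a fixed grid) whose worst-case
    reward still satisfies the performance requirement r_c <= R(q, u)."""
    best = 0.0
    for i in range(501):
        alpha = i / 100.0
        worst_u = min(u_tilde + alpha, total_bound)   # R(q, u) = q - u is worst at the largest u
        if r_c <= q - worst_u:
            best = alpha
        else:
            break
    return best

# Enlarging the total region of uncertainty tenfold leaves the robustness of the
# decision unchanged: the analysis never leaves the neighborhood of the estimate.
print(robustness(q=3.0, r_c=1.0, total_bound=10.0))    # 2.0
print(robustness(q=3.0, r_c=1.0, total_bound=100.0))   # 2.0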

The thing to note, then, is that info-gap's robustness and opportuneness are by definition local properties. As such they cannot assess the performance of decisions over the total region of uncertainty. For this reason it is not clear how info-gap's robustness/opportuneness models can provide a meaningful, sound and useful basis for decision under severe uncertainty, where the estimate is poor and is likely to be substantially wrong.

This crucial issue is addressed in subsequent sections of this article.

Maximin/Minimin: playing robustness/opportuneness games with Nature

For well over sixty years now Wald's Maximin model has figured in classical decision theory and related areas, such as robust optimization, as the foremost non-probabilistic paradigm for modeling and treatment of severe uncertainty.

Info-gap is propounded (e.g. Ben-Haim 2001, 2006) as a new non-probabilistic theory that is radically different from all current decision theories for decision under uncertainty. So, it is imperative to examine in this discussion in what way, if any, info-gap's robustness model is radically different from Maximin. For one thing, there is a well-established assessment of the utility of Maximin. For example, Berger (Chapter 5)[66] suggests that even in situations where no prior information is available (a best case for Maximin), Maximin can lead to bad decision rules and be hard to implement. He recommends Bayesian methodology. And as indicated above,

It should also be remarked that the minimax principle even if it is applicable leads to an extremely conservative policy.

Tintner (1952, p. 25)[67]

However, quite apart from the ramifications that establishing this point might have for the utility of info-gap's robustness model, the reason that it behooves us to clarify the relationship between info-gap and Maximin is the centrality of the latter in decision theory. After all, this is a major classical decision methodology. So, any theory claiming to furnish a new non-probabilistic methodology for decision under severe uncertainty would be expected to be compared to this stalwart of decision theory. And yet, not only is a comparison of info-gap's robustness model to Maximin absent from the three books expounding info-gap (Ben-Haim 1996, 2001, 2006), Maximin is not even mentioned in them as the major decision-theoretic methodology for severe uncertainty that it is.

Elsewhere in the info-gap literature, one can find discussions dealing with similarities and differences between these two paradigms, as well as discussions on the relationship between info-gap and worst-case analysis.[7][16][35][37][53][68] However, the general impression is that the intimate connection between these two paradigms has not been identified. Indeed, the opposite is argued. For instance, Ben-Haim (2005[35]) argues that info-gap's robustness model is similar to Maximin but is not a Maximin model.

The following quote eloquently expresses Ben-Haim's assessment of info-gap's relationship to Maximin and it provides ample motivation for the analysis that follows.

We note that robust reliability is emphatically not a worst-case analysis. In classical worst-case min-max analysis the designer minimizes the impact of the maximally damaging case. But an info-gap model of uncertainty is an unbounded family of nested sets:  \ \displaystyle \mathcal{U}(\alpha,\tilde{u}) \ , for all \ \displaystyle \alpha\ge 0 \ . Consequently, there is no worst case: any adverse occurrence is less damaging than some other more extreme event occurring at a larger value of \ \displaystyle \alpha \ . What Eq. (1) expresses is the greatest level of uncertainty consistent with no-failure. When the designer chooses q to maximize \ \displaystyle \hat{\alpha}(q, r_{c}) \ he is maximizing his immunity to an unbounded ambient uncertainty. The closest this comes to "min-maxing" is that the design is chosen so that "bad" events (causing reward \ \displaystyle  R\ less than \ \displaystyle  r_{c}\ ) occur as "far away" as possible (beyond a maximized value of \ \displaystyle \hat{\alpha} \ ).

Ben-Haim (1999, pp. 271–272)[69]

The point to note here is that this statement misses the fact that the horizon of uncertainty \ \displaystyle \alpha \ is bounded above (implicitly) by the performance requirement

 r_{c} \le R(q,u),\forall u\in \mathcal{U}(\alpha,\tilde{u})

and that info-gap conducts its worst-case analysis (one analysis at a time for a given \ \displaystyle \alpha \ge 0 \ ) within each of the regions of uncertainty \displaystyle \ \mathcal{U}(\alpha,\tilde{u}), \alpha\ge 0 \ .

In short, given the discussions in the info-gap literature on this issue, it is obvious that the kinship between info-gap's robustness model and Wald's Maximin model, as well as info-gap's kinship with other models of classical decision theory must be brought to light. So, the objective in this section is to place info-gap's robustness and opportuneness models in their proper context, namely within the wider frameworks of classical decision theory and robust optimization.

The discussion is based on the classical decision theoretic perspective outlined by Sniedovich (2007[70]) and on standard texts in this area (e.g. Resnik 1987,[63] French 1988[64]).

Certain parts of the exposition that follows have a mathematical slant.
This is unavoidable because info-gap's models are mathematical.

Generic models

The basic conceptual framework that classical decision theory provides for dealing with uncertainty is that of a two-player game. The two players are the decision maker (DM) and Nature, where Nature represents uncertainty. More specifically, Nature represents the DM's attitude towards uncertainty and risk.

Note that a clear distinction is made in this regard between a pessimistic decision maker and an optimistic decision maker, namely between a worst-case attitude and a best-case attitude. A pessimistic decision maker assumes that Nature plays against him whereas an optimistic decision maker assumes that Nature plays with him.

To express these intuitive notions mathematically, classical decision theory uses a simple model consisting of the following three constructs:

  • A set \ \displaystyle D representing the decision space available to the DM.
  • A set of sets \ \displaystyle \{S(d): d\in D\}\ representing state spaces associated with the decisions in \ \displaystyle D .
  • A function \ \displaystyle g=g(d,s) stipulating the outcomes generated by the decision-state pairs \ \displaystyle (d,s)\ .

The function \ \displaystyle g \ is called the objective function, payoff function, return function, cost function, etc.

The decision-making process (game) defined by these objects consists of three steps:

  • Step 1: The DM selects a decision \ \displaystyle d\in D \ .
  • Step 2: In response, given \ \displaystyle d\ , Nature selects a state \ \displaystyle s\in S(d)\ .
  • Step 3: The outcome \ \displaystyle g(d,s) is allotted to the DM.

Note that in contrast to games considered in classical game theory, here the first player (the DM) moves first, so that the second player (Nature) knows what decision was selected by the first player prior to selecting her own. Thus, the conceptual and technical complications regarding the existence of a Nash equilibrium point are not pertinent here. Nature is not an independent player; it is a conceptual device describing the DM's attitude towards uncertainty and risk.

At first sight, the simplicity of this framework may strike one as naive. Yet, as attested by the variety of specific instances that it encompasses, it is rich in possibilities, flexible, and versatile. For the purposes of this discussion it suffices to consider the following classical generic setup:


\begin{array}{cccc}
z^{*}= & \stackrel{DM}{\mathop{Opt}}&\stackrel{Nature}{\mathop{opt}}\quad & g(d,s)\\[-0.05in]
& d\in D & s\in S(d) &
\end{array}

where \ \displaystyle \mathop{Opt} \ and  \displaystyle \mathop{opt}\ represent the DM's and Nature's optimality criteria, respectively, that is, each is equal to either \ \displaystyle \max\ or \ \displaystyle \min\ .

If \ \displaystyle \mathop{Opt} = \mathop{opt}\ then the game is cooperative, and if \ \displaystyle \mathop{Opt} \neq \mathop{opt}\ then the game is non-cooperative. Thus, this format represents four cases: two non-cooperative games (Maximin and Minimax) and two cooperative games (Minimin and Maximax). The respective formulations are as follows:


\begin{array}{c||c}
\textit{Worst-Case\ Pessimism} & \textit{Best-Case\ Optimism}\\
\hline
Maximin \ \ \ \ \ \ \ \ \ \ \ Minimax & Minimin \ \ \ \ \ \ \ \ \ \ \ \ \ Maximax\\
\displaystyle \max_{d\in D}\,\min_{s\in S(d)}\,g(d,s) \ \ \  \displaystyle \min_{d\in D}\,\max_{s\in S(d)}\,g(d,s)  & \displaystyle \min_{d\in D}\,\min_{s\in S(d)}\,g(d,s) \ \ \ \displaystyle \max_{d\in D}\,\max_{s\in S(d)}\,g(d,s)
\end{array}

Each case is specified by a pair of optimality criteria employed by the DM and Nature. For example, Maximin depicts a situation where the DM strives to maximize the outcome and Nature strives to minimize it. Similarly, the Minimin paradigm represents situations where both the DM and Nature strive to minimize the outcome.
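These four cases can be illustrated with a small Python sketch over hypothetical finite decision and state spaces (the spaces and outcomes below are illustrative only); only the Maximin and Minimin values are computed:

# A small sketch of the generic setup over hypothetical finite spaces: the DM
# picks d, Nature then picks a state s in S(d), and the outcome is g(d, s).
D = ["d1", "d2"]
S = {"d1": ["s1", "s2"], "d2": ["s1", "s2", "s3"]}
g = {("d1", "s1"): 2, ("d1", "s2"): 5,
     ("d2", "s1"): 4, ("d2", "s2"): 3, ("d2", "s3"): 6}

# Maximin: a pessimistic DM maximizes the worst outcome that Nature can inflict.
maximin_value = max(min(g[d, s] for s in S[d]) for d in D)   # 3, attained by d2

# Minimin: the DM and a sympathetic Nature both strive to minimize the outcome.
minimin_value = min(min(g[d, s] for s in S[d]) for d in D)   # 2, attained by d1

print(maximin_value, minimin_value)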

Of particular interest to this discussion are the Maximin and Minimin paradigms because they subsume info-gap's robustness and opportuneness models, respectively. So, here they are:

      Maximin Game:        \ \displaystyle \max_{d\in D}\,\min_{s\in S(d)}\,g(d,s)
  • Step 1: The DM selects a decision \ \displaystyle d\in D \ with a view to maximize the outcome \ \displaystyle g(d,s) \ .
  • Step 2: In response, given \ \displaystyle d\ , Nature selects a state in \ \displaystyle S(d)\ that minimizes  \ \displaystyle g(d,s) \ over \ \displaystyle S(d) \ .
  • Step 3: The outcome \ \displaystyle g(d,s) is allotted to the DM.
      Minimin Game:        \ \displaystyle \min_{d\in D}\,\min_{s\in S(d)}\,g(d,s)
  • Step 1: The DM selects a decision \ \displaystyle d\in D \ with a view to minimize the outcome \ \displaystyle g(d,s) \ .
  • Step 2: In response, given \ \displaystyle d\ , Nature selects a state in \ \displaystyle S(d)\ that minimizes  \ \displaystyle g(d,s) \ over \ \displaystyle S(d) \ .
  • Step 3: The outcome \ \displaystyle g(d,s) is allotted to the DM.

With this in mind, consider now info-gap's robustness and opportuneness models.

Info-gap's robustness model

From a classical decision theoretic point of view info-gap's robustness model is a game between the DM and Nature, where the DM selects the value of \ \displaystyle \alpha \ (aiming for the largest possible) whereas Nature selects the worst value of \ \displaystyle  u \ in \ \displaystyle \mathcal{U}(\alpha,\tilde{u}) \ . In this context the worst value of \ \displaystyle u \ pertaining to a given \ \displaystyle (q,\alpha) \ pair is a \ \displaystyle  u\in \mathcal{U}(\alpha,\tilde{u}) \ that minimizes \ \displaystyle R(q,u)\ over \ \displaystyle \mathcal{U}(\alpha,\tilde{u})\ , thereby violating the performance requirement \ \displaystyle r_{c} \le R(q,u) \ whenever this is possible within \ \displaystyle \mathcal{U}(\alpha,\tilde{u})\ .

There are various ways to incorporate the DM's objective and Nature's antagonistic response in a single outcome. For instance, one can use the following characteristic function for this purpose:


\varphi(q,\alpha,u):=\begin{cases}
\quad \alpha &, \ \ r_{c} \le R(q,u) \\
-\infty &, \ \ r_{c} > R(q,u)
\end{cases}  \ , \  q\in \mathcal{Q}, \alpha\ge 0, u\in \mathcal{U}(\alpha,\tilde{u})

Note that, as desired, for any triplet \ \ (q,\alpha,u)\ of interest we have


r_{c} \le R(q,u) \ \ \ \longleftrightarrow \ \ \ \alpha \le \varphi(q,\alpha,u)

hence from the DM's point of view satisficing the performance constraint is equivalent to maximizing   \ \displaystyle \varphi(q,\alpha,u)\ .

In short,

      Info-gap's Maximin Robustness Game for decision \ \displaystyle q \ :        \ \displaystyle \hat{\alpha}(q,r_{c}):=\max_{\alpha \ge 0}\,\min_{u\in \mathcal{U}(\alpha,\tilde{u})}\,\varphi(q,\alpha,u)
  • Step 1: The DM selects a horizon of uncertainty \ \displaystyle \alpha\ge 0 \ with a view to maximize the outcome \ \displaystyle \varphi(q,\alpha,u) \ .
  • Step 2: In response, given \ \displaystyle \alpha \ , Nature selects a \ \displaystyle u \in \mathcal{U}(\alpha,\tilde{u})\ that minimizes  \ \displaystyle \varphi(q,\alpha,u) \ over \ \displaystyle \mathcal{U}(\alpha,\tilde{u}) \ .
  • Step 3: The outcome \ \displaystyle \varphi(q,\alpha,u) is allotted to the DM.

Clearly, the DM's optimal alternative is to select the largest value of \ \displaystyle \alpha \ such that the worst \ \displaystyle u\in \mathcal{U}(\alpha,\tilde{u})\ satisfies the performance requirement.

Maximin Theorem

As shown in Sniedovich (2007),[47] info-gap's robustness model is a simple instance of Wald's Maximin model. Specifically,


{\hat{\alpha}}(q, {r_{c}}) = \max \left \{ \alpha: \  {r_{\rm c}} \le  \min_{u \in \mathcal{U}(\alpha, \tilde{u})} R(q,u) \right \} = \max_{\alpha \ge 0} \min_{u \in \mathcal{U}(\alpha,\tilde{u})} \varphi(q,\alpha,u) \quad \quad \Box
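As a numerical illustration of this equivalence, the following Python sketch evaluates both formats of \ \displaystyle \hat{\alpha}(q,r_{c}) \ on a hypothetical model (R(q,u) = q - u**2, estimate \tilde{u} = 0, interval regions of uncertainty, and a coarse grid); the model and all numbers are illustrative only:

# A numerical check of the theorem on a hypothetical model: R(q, u) = q - u**2,
# estimate u_tilde = 0, U(alpha, 0) = [-alpha, alpha], requirement r_c <= R(q, u).
q, r_c = 5.0, 1.0
R = lambda u: q - u ** 2

def u_grid(alpha, n=200):
    """A finite sample of the region of uncertainty U(alpha, 0) = [-alpha, alpha]."""
    return [-alpha + 2 * alpha * k / n for k in range(n + 1)]

def phi(alpha, u):
    """Characteristic function: alpha if the requirement holds at u, -inf otherwise."""
    return alpha if r_c <= R(u) else float("-inf")

alphas = [i / 100.0 for i in range(501)]

# Info-gap format: the largest alpha whose worst-case reward still meets r_c.
alpha_hat_infogap = max(a for a in alphas if r_c <= min(R(u) for u in u_grid(a)))

# Classical Maximin format: maximize over alpha the worst-case value of phi.
alpha_hat_maximin = max(alphas, key=lambda a: min(phi(a, u) for u in u_grid(a)))

print(alpha_hat_infogap, alpha_hat_maximin)   # both 2.0, the analytical value

Both computations return the same horizon of uncertainty, as the theorem requires.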

Info-gap's opportuneness model

By the same token, info-gap's opportuneness model is a simple instance of the generic Minimin model. That is,


{\hat{\beta}}(q, {r_{w}}) = \min \left \{ \alpha: \  {r_{w}} \le  \max_{u \in \mathcal{U}(\alpha, \tilde{u})} R(q,u) \right \} = \min_{\alpha \ge 0} \min_{u \in \mathcal{U}(\alpha,\tilde{u})} \psi(q,\alpha,u)

where


\psi(q,\alpha,u) = \left\{\begin{matrix} \alpha &,& {r_{w}} \le  R(q,u)\\ \infty &,&{r_{w}} > R(q,u) \end{matrix}\right. \ , \ \alpha \ge 0, u \in \mathcal{U}(\alpha,\tilde{u})

observing that, as desired, for any triplet \ \ (q,\alpha,u)\ of interest we have


r_{w} \le R(q,u) \ \ \ \longleftrightarrow \ \ \ \alpha \ge \psi(q,\alpha,u)

hence, for a given pair \ \displaystyle (q,\alpha)\ , a sympathetic Nature helps the DM satisfy the windfall requirement by minimizing the outcome \ \displaystyle \psi(q,\alpha,u)\ over \ \displaystyle \mathcal{U}(\alpha,\tilde{u}) \ . Nature's behavior here is thus a reflection of her sympathetic stance.

Remark: This attitude towards risk and uncertainty, which assumes that Nature will play with us, is rather naive. As noted by Resnik (1987, p. 32[63]), "... But that rule surely would have few adherents ...". Nevertheless, it is often used in combination with the Maximin rule in the formulation of Hurwicz's optimism-pessimism rule (Resnik 1987,[63] French 1988[64]) with a view to mitigate the extreme conservatism of Maximin.
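By way of illustration, a matching Python sketch for the opportuneness model, again on a hypothetical example (R(q,u) = q + u, estimate \tilde{u} = 0, interval regions of uncertainty, windfall requirement r_w = 4), shows the two formats of \ \displaystyle \hat{\beta}(q,r_{w}) \ agreeing:

# A sketch of the opportuneness/Minimin side on a hypothetical model:
# R(q, u) = q + u, estimate u_tilde = 0, U(alpha, 0) = [-alpha, alpha],
# windfall requirement r_w <= R(q, u).
q, r_w = 3.0, 4.0
R = lambda u: q + u

def u_grid(alpha, n=200):
    """A finite sample of the region of uncertainty U(alpha, 0) = [-alpha, alpha]."""
    return [-alpha + 2 * alpha * k / n for k in range(n + 1)]

def psi(alpha, u):
    """Characteristic function: alpha if the windfall is reachable at u, +inf otherwise."""
    return alpha if r_w <= R(u) else float("inf")

alphas = [i / 100.0 for i in range(501)]

# Info-gap format: the smallest alpha at which the best case already meets r_w.
beta_hat_infogap = min(a for a in alphas if r_w <= max(R(u) for u in u_grid(a)))

# Classical Minimin format: minimize over alpha the best-case (smallest) value of psi.
beta_hat_minimin = min(min(psi(a, u) for u in u_grid(a)) for a in alphas)

print(beta_hat_infogap, beta_hat_minimin)   # both 1.0, the analytical value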

Mathematical programming formulations

To bring out more forcefully that info-gap's robustness model is an instance of the generic Maximin model, and info-gap's opportuneness model an instance of the generic Minimin model, it is instructive to examine the equivalent so-called Mathematical Programming (MP) formats of these generic models (Ecker and Kupferschmid,[71] 1988, pp. 24–25; Thie 1988,[72] pp. 314–317; Kouvelis and Yu,[59] 1997, p. 27):


\begin{array}{c|c|c}
\textit{Model} & \textit{Classical\  Format} &  \textit{MP\ Format}  \\
\hline 
\textit{Maximin:} & \displaystyle \max_{d\in D}\ \min_{s\in S(d)}\ g(d,s) &
\displaystyle \max_{d\in D,\alpha\in \mathbb{R}}\{\alpha: \alpha \le \min_{s\in S(d)} g(d,s)\} \\
\textit{Minimin:} & \displaystyle \min_{d\in D}\ \min_{s\in S(d)}\ g(d,s) &
\displaystyle \min_{d\in D,\alpha\in \mathbb{R}}\{\alpha: \alpha \ge \min_{s\in S(d)} g(d,s)\}
\end{array}

Thus, in the case of info-gap we have


\begin{array}{c|c|c|c}
\textit{Model} & \textit{Info-Gap\ Format} & \textit{MP\ Format} &  \textit{Classical\ Format}  \\
\hline 
\textit{Robustness} &\displaystyle \max\{\alpha: r_{c}\le \min_{u\in \mathcal{U}(\alpha,\tilde{u})} R(q,u)\}  &\displaystyle \max\{\alpha: \alpha \le \min_{u\in \mathcal{U}(\alpha,\tilde{u})}\varphi(q,\alpha,u)\} & \displaystyle \max_{\alpha\ge 0}\ \min_{u\in \mathcal{U}(\alpha,\tilde{u})}\ \varphi(q,\alpha,u) \\
\textit{Opportuneness} &\displaystyle \min\{\alpha: r_{w}\le \max_{u\in \mathcal{U}(\alpha,\tilde{u})} R(q,u)\}  &\displaystyle \min\{\alpha: \alpha \ge \min_{u\in \mathcal{U}(\alpha,\tilde{u})}\psi(q,\alpha,u)\} & \displaystyle \min_{\alpha\ge 0}\ \min_{u\in \mathcal{U}(\alpha,\tilde{u})}\ \psi(q,\alpha,u)
\end{array}

To verify the equivalence between info-gap's formats and the respective decision theoretic formats, recall that, by construction, for any triplet \ \displaystyle (q,\alpha,u)\ of interest we have


\alpha \le \varphi(q,\alpha,u)\ \ \  \longleftrightarrow \ \ \  r_{c} \le R(q,u)


\alpha \ge \psi(q,\alpha,u) \ \ \ \longleftrightarrow \ \ \ r_{w} \le R(q,u)

This means that in the case of robustness/Maximin, an antagonistic Nature will (effectively) minimize \ \displaystyle R(q,u) \ by minimizing \ \displaystyle \varphi(q,\alpha,u) \ whereas in the case of opportuneness/Minimin a sympathetic Nature will (effectively) maximize \ \displaystyle R(q,u) \ by minimizing \ \displaystyle \psi(q,\alpha,u) \ .

Summary

Info-gap's robustness analysis stipulates that given a pair \ \displaystyle (q,\alpha)\ , the worst element of \ \displaystyle \mathcal{U}(\alpha,\tilde{u})\ is realized. This of course is a typical Maximin analysis. In the parlance of classical decision theory:

The Robustness of decision \ \displaystyle q \ is the largest horizon of uncertainty, \ \displaystyle \alpha \ , such that the worst value of \ \displaystyle u \ in \ \displaystyle \mathcal{U}(\alpha,\tilde{u}) \ satisfies the performance requirement \ \displaystyle r_{c} \le R(q,u) \ .

Similarly, info-gap's opportuneness analysis stipulates that given a pair \ \displaystyle (q,\alpha)\ , the best element of \ \displaystyle \mathcal{U}(\alpha,\tilde{u})\ is realized. This of course is a typical Minimin analysis. In the parlance of classical decision theory:

The Opportuneness of decision \ \displaystyle q \ is the smallest horizon of uncertainty, \ \displaystyle \alpha \ , such that the best value of \ \displaystyle u \ in \ \displaystyle \mathcal{U}(\alpha,\tilde{u}) \ satisfies the performance requirement \ \displaystyle r_{w} \le R(q,u) \ .

The mathematical transliterations of these concepts are straightforward, resulting in typical Maximin/Minimin models, respectively.

Far from being restrictive, the generic Maximin/Minimin models' lean structure is a blessing in disguise. The main point here is that the abstract character of the three basic constructs of the generic models

  • Decision
  • State
  • Outcome

in effect allows for great flexibility in modeling.

A more detailed analysis is therefore required to bring out the full force of the relationship between info-gap and generic classical decision theoretic models. See the section Notes on the art of math modeling below.

Treasure hunt

The following is a pictorial summary of Sniedovich's (2007) discussion on local vs global robustness. For illustrative purposes it is cast here as a Treasure Hunt. It shows how the elements of info-gap's robustness model relate to one another and how the severe uncertainty is treated in the model.

(1) You are in charge of a treasure hunt on a large island somewhere in the Asia/Pacific region. You consult a portfolio of search strategies. You need to decide which strategy would be best for this particular expedition.

(2) The difficulty is that the treasure's exact location on the island is unknown. There is a severe gap between what you need to know—the true location of the treasure—and what you actually know—a poor estimate of the true location.

(3) Somehow you compute an estimate of the true location of the treasure. Since we are dealing here with severe uncertainty, we assume—methodologically speaking—that this estimate is a poor indication of the true location and is likely to be substantially wrong.

(4) To determine the robustness of a given strategy, you conduct a local worst-case analysis in the immediate neighborhood of the poor estimate. Specifically, you compute the largest safe deviation from the poor estimate that does not violate the performance requirement.

(5) You compute the robustness of each search strategy in your portfolio and you select the one whose robustness is the largest.

(6) To remind yourself and the financial backers of the expedition that this analysis is subject to severe uncertainty in the true location of the treasure, it is important—methodologically speaking—to display the true location on the map. Of course, you do not know the true location. But given the severity of the uncertainty, you place it at some distance from the poor estimate. The more severe the uncertainty, the greater should the distance (gap) between the true location and the estimate be.

Epilogue:
According to Sniedovich (2007) this is an important reminder of the central issue in decision-making under severe uncertainty. The estimate we have is a poor indication of the true value of the parameter of interest and is likely to be substantially wrong. Therefore, in the case of info-gap it is important to show the gap on the map by displaying the true value of \ \displaystyle u \ somewhere in the region of uncertainty.


In summary:

Info-gap's robustness model is a mathematical representation of a local worst-case analysis in the neighborhood of a given estimate of the true value of the parameter of interest. Under severe uncertainty the estimate is assumed to be a poor indication of the true value of the parameter and is likely to be substantially wrong.

The fundamental question therefore is: Given the

  • Severity of the uncertainty
  • Local nature of the analysis
  • Poor quality of the estimate

how meaningful and useful are the results generated by the analysis, and how sound is the methodology as a whole?

More on this criticism can be found on Sniedovich's web site.

Notes on the art of math modeling

Constraint satisficing vs payoff optimization

Any satisficing problem can be formulated as an optimization problem. To see that this is so, let the objective function of the optimization problem be the indicator function of the constraints pertaining to the satisficing problem. Thus, if our concern is to identify a worst-case scenario pertaining to a constraint, this can be done via a suitable Maximin/Minimax worst-case analysis of the indicator function of the constraint.

This means that the generic decision theoretic models can handle outcomes that are induced by constraint satisficing requirements rather than by, say, payoff maximization.

In particular, note the equivalence

 r \le f(x) \ \ \longleftrightarrow \ \ 1 \le I(x)

where

 I(x):= \begin{cases}
1 &, \ \  r \le f(x) \\
0 &,\ \ r > f(x)
\end{cases}\ , \ x\in X

and therefore


x\in X,\ r \le f(x) \ \ \ \longleftrightarrow \ \ \ I(x) = \max_{x'\in X}\, I(x') = 1

In practical terms, this means that an antagonistic Nature will aim to select a state that will violate the constraint whereas a sympathetic Nature will aim to select a state that will satisfy the constraint. As for the outcome, the penalty for violating the constraint is such that the decision maker will refrain from selecting a decision that will allow Nature to violate the constraint within the state space pertaining to the selected decision.
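This device can be demonstrated with a tiny Python sketch; the threshold r, the function f and the set X below are hypothetical:

# A sketch of the satisficing-as-optimization device on hypothetical data.
r = 2.0
f = lambda x: x * x

def indicator(x):
    """I(x) = 1 if the constraint r <= f(x) is satisfied, 0 otherwise."""
    return 1 if r <= f(x) else 0

X = [-2.0, -1.0, 0.0, 1.5, 3.0]

# The constraint r <= f(x) holds exactly when I(x) attains its maximal value 1,
# so maximizing the indicator over X singles out the satisficing points.
satisficers = [x for x in X if indicator(x) == 1]
print(satisficers)   # [-2.0, 1.5, 3.0]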

The role of "min" and "max"

It should be stressed that the feature that accords info-gap's robustness model its typical Maximin character is not the presence of both \ \displaystyle \min \ and \ \displaystyle \max \ in the formulation of the info-gap model. Rather, the reason is a deeper one: it goes to the heart of the conceptual framework that the Maximin model captures, namely Nature playing against the DM. This is what is crucial here.

To see that this is so, let us generalize info-gap's robustness model and consider the following modified model instead:


 z(q):= \max\{\alpha: R(q,u) \in C, \forall u \in \mathcal{U}(\alpha,\tilde{u})\}

where in this context \ \displaystyle C \ is some set and \ R\  is some function on \ \displaystyle \mathcal{Q}\times \mathfrak{U} . Note that it is not assumed that \ \displaystyle R \ is a real-valued function. Also note that "min" is absent from this model.

All we need to do to incorporate a min  into this model is to express the constraint


R(q,u) \in C \ , \ \forall u \in \mathcal{U}(\alpha,\tilde{u})

as a worst-case requirement. This is a straightforward task, observing that for any triplet \ \displaystyle  (q,\alpha,u)\ of interest we have


R(q,u) \in C \ \ \ \longleftrightarrow \ \ \ \alpha \le I(q,\alpha,u)

where


I(q,\alpha,u):= \begin{cases}
\quad \alpha &, \ \  R(q,u) \in C\\
-\infty &, \ \ R(q,u) \notin C
\end{cases} \ , \ q\in \mathcal{Q}, u\in \mathcal{U}(\alpha,\tilde{u})

hence,


\begin{array}{ccl}
\max\{\alpha: R(q,u) \in C, \forall u \in \mathcal{U}(\alpha,\tilde{u})\} &=& \max\{\alpha: \alpha \le I(q,\alpha,u), \forall u \in \mathcal{U}(\alpha,\tilde{u})\} \\
&=& \max\{\alpha: \alpha \le\displaystyle  \min_{u \in \mathcal{U}(\alpha,\tilde{u})} I(q,\alpha,u)\}
\end{array}

which, of course, is a Maximin model a la Mathematical Programming.

In short,


\max\{\alpha: R(q,u) \in C, \forall u \in \mathcal{U}(\alpha,\tilde{u})\} = \max_{\alpha\ge 0}\ \min_{u \in \mathcal{U}(\alpha,\tilde{u})} I(q,\alpha,u)

Note that although the model on the left does not include an explicit "min", it is nevertheless a typical Maximin model. The feature rendering it a Maximin model is the \ \displaystyle \forall  \ requirement which lends itself to an intuitive worst-case formulation and interpretation.

In fact, the presence of a double "max" in an info-gap robustness model does not necessarily alter the fact that this model is a Maximin model. For instance, consider the robustness model


\max\{\alpha: r_{c}\ge \max_{u\in \mathcal{U}(\alpha,\tilde{u})} R(q,u)\}

This is an instance of the following Maximin model


\max_{\alpha \ge 0} \min_{u\in \mathcal{U}(\alpha,\tilde{u})} \vartheta(q,\alpha,u)

where


\vartheta(q,\alpha,u):= \begin{cases}
\quad \alpha  &, \ \  r_{c} \ge R(q,u)\\
-\infty &,\ \  r_{c} < R(q,u)
\end{cases}

The "inner min" indicates that Nature plays against the DM—the "max" player—hence the model is a robustness model.

The nature of the info-gap/Maximin/Minimin connection

This modeling issue is discussed here because claims have been made that although there is a close relationship between info-gap's robustness and opportuneness models and the generic Maximin and Minimin models, respectively, the description of info-gap as an instance of   these models is too strong. The argument put forward is that although it is true that info-gap's robustness model can be expressed as a Maximin model, the former is not an instance of the latter.

This objection apparently stems from the fact that any optimization problem can be formulated as a Maximin model by a simple employment of dummy  variables. That is, clearly


\min_{x\in X} f(x) = \max_{y\in Y}\min_{x\in X} g(y,x)

where


g(y,x) = f(x) \ , \ \forall x\in X, y\in Y

for any arbitrary non-empty set \ \displaystyle Y \ .

The point of this objection seems to be that we are running the risk of watering down the meaning of the term instance if we thus contend that any minimization problem is an instance of the Maximin model.

It must therefore be pointed out that this concern is utterly unwarranted in the case of the info-gap/Maximin/Minimin relation. The correspondence between info-gap's robustness model and the generic Maximin model is neither contrived nor formulated with the aid of dummy objects. The correspondence is immediate, intuitive and compelling, hence aptly described by the term instance of.

Specifically, as shown above, info-gap's robustness model is an instance of the generic Maximin model specified by the following constructs:


\begin{array}{rccl}
\text{Decision Space} & D & = & [0,\infty)\\ 
\text{State Spaces} & S(d) & = & \mathcal{U}(d,\tilde{u})\\
\text{Outcomes} & g(d,s) & = & \varphi(q,d,s) 
\end{array}

Furthermore, those objecting to the use of the term instance of should note that the Maximin model formulated above has an equivalent so-called Mathematical Programming (MP) formulation, deriving from the fact that


\begin{array}{ccc}
\text{Classical Maximin Format}&& \text{MP Maximin Format}\\
 \displaystyle \max_{d\in D} \ \min_{s \in S(d)}\ g(d,s) &=&  \displaystyle \max_{d\in D,\alpha \in \mathbb{R}}\{\alpha: \alpha \le  \min_{s\in S(d)} g(d,s)\} 
\end{array}

where \ \mathbb{R} \ denotes the real line.

So here are side by side info-gap's robustness model and the two equivalent formulations of the generic Maximin paradigm:


\begin{array}{c}\textit{Robustness\   Model}
\end{array}
 

\begin{array}{c|c|c}
\text{Info-gap Format}& \text{MP Maximin Format}&\text{Classical Maximin Format}\\
\hline \\[-0.18in]
\displaystyle \max\{\alpha: r_{c} \le \min_{u\in \mathcal{U}(\alpha,\tilde{u})} R(q,u)\}&\displaystyle \max\{\alpha: \alpha \le \min_{u \in \mathcal{U}(\alpha,\tilde{u})}\ \varphi(q,\alpha,u)\}&\displaystyle \max_{\alpha\ge 0} \ \min_{u\in \mathcal{U}(\alpha,\tilde{u})} \varphi(q,\alpha,u)
\end{array}

Note that the equivalence between these three representations of the same decision-making situation makes no use of dummy variables. It is based on the equivalence


r_{c} \le R(q,u)  \longleftrightarrow \alpha \le \varphi(q,\alpha,u)

deriving directly from the definition of the characteristic function \ \displaystyle \varphi \ .

Clearly then, info-gap's robustness model is an instance of the generic Maximin model.

Similarly, for info-gap's opportuneness model we have


\begin{array}{c}\textit{Opportuneness\   Model}
\end{array}
 

\begin{array}{c|c|c}
\text{Info-gap Format}& \text{MP Minimin Format}&\text{Classical Minimin Format}\\
\hline \\[-0.18in]
\displaystyle \min\{\alpha: r_{w} \le \max_{u\in \mathcal{U}(\alpha,\tilde{u})} R(q,u)\} & \displaystyle \min\{\alpha: \alpha \ge \min_{u \in \mathcal{U}(\alpha,\tilde{u})}\ \psi(q,\alpha,u)\} & \displaystyle \min_{\alpha\ge 0} \ \min_{u\in \mathcal{U}(\alpha,\tilde{u})} \psi(q,\alpha,u)
\end{array}

Again, it should be stressed that the equivalence between these three representations of the same decision-making situation makes no use of dummy variables. It is based on the equivalence


r_{w} \le R(q,u)  \longleftrightarrow \alpha \ge \psi(q,\alpha,u)

deriving directly from the definition of the characteristic function \ \displaystyle \psi \ .

Thus, to "help" the DM minimize \ \displaystyle \alpha \ , a sympathetic Nature will select a u \in \mathcal{U}(\alpha,\tilde{u})\ that minimizes \ \psi(q,\alpha,u) \ over \ \displaystyle  \mathcal{U}(\alpha,\tilde{u})\ .

Clearly, info-gap's opportuneness model is an instance of the generic Minimin model.

Other formulations

There are of course other valid representations of the robustness/opportuneness models. For instance, in the case of the robustness model, the outcomes can be defined as follows (Sniedovich 2007[70]):


g(\alpha,u):= \alpha \cdot \left(r_{c} \preceq R(q,u)\right)

where the binary operation \ \ \preceq \ \ is defined as follows:


 a \preceq b := \begin{cases}
1 &, \ \ a\le b \\
0 &,\ \  a>b
\end{cases}

The corresponding MP format of the Maximin model would then be as follows:


\max\{\alpha: \alpha \le \min_{u\in \mathcal{U}(\alpha,\tilde{u})} \alpha \cdot \left(r_{c} \preceq R(q,u)\right) \} = \max\{\alpha: 1 \le \min_{u\in \mathcal{U}(\alpha,\tilde{u})} \left(r_{c} \preceq R(q,u)\right)\}

In words, to maximize the robustness, the DM selects the largest value of \ \alpha \ such that the performance constraint \ r_{c} \le R(q,u) \ is satisfied by all \ u\in \mathcal{U}(\alpha,\tilde{u})\ . In plain language: the DM selects the largest value of \ \displaystyle \alpha \ whose worst outcome in the region of uncertainty of size \ \displaystyle \alpha \ satisfies the performance requirement.
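In code, this alternative outcome function might look as follows (a sketch under hypothetical assumptions: R(q,u) = q - |u| with q = 3, r_c = 1, and interval regions of uncertainty centred at \tilde{u} = 0):

# A sketch of the alternative outcome g(alpha, u) = alpha * (r_c <= R(q, u)),
# using the 0/1 comparison defined above; the model is hypothetical.
q, r_c = 3.0, 1.0
R = lambda u: q - abs(u)

def g(alpha, u):
    """alpha if the performance requirement holds at u, 0 otherwise."""
    return alpha * (1 if r_c <= R(u) else 0)

# The worst case over U(alpha, 0) = [-alpha, alpha] occurs at u = +/- alpha, so the
# robustness is the largest alpha for which g(alpha, alpha) still equals alpha.
print(g(1.5, 1.5), g(2.0, 2.0), g(2.5, 2.5))   # 1.5 2.0 0.0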

Simplifications

As a rule the classical Maximin formulations are not particularly useful when it comes to solving the problems they represent, as no "general purpose" Maximin solver is available (Rustem and Howe 2002[60]).

It is common practice therefore to simplify the classical formulation with a view to derive a formulation that would be readily amenable to solution. This is a problem-specific task which involves exploiting a problem's specific features. The mathematical programming format of Maximin is often more user-friendly in this regard.

The best example is of course the classical Maximin model of 2-person zero-sum games, which after streamlining is reduced to a standard linear programming model (Thie 1988,[72] pp. 314–317) that is readily solved by linear programming algorithms.

To reiterate, this linear programming model is an instance of the generic Maximin model obtained via simplification of the classical Maximin formulation of the 2-person zero-sum game.
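As a sketch of this streamlining, the following Python code solves the row player's Maximin problem for a hypothetical 2-by-2 zero-sum game (matching pennies) via the linear programming formulation, using scipy.optimize.linprog; the payoff matrix is illustrative only:

import numpy as np
from scipy.optimize import linprog

# Matching pennies, payoffs to the row player; the row player's Maximin mixed
# strategy is obtained from the standard LP streamlining of the Maximin model.
A = np.array([[1.0, -1.0],
              [-1.0, 1.0]])
m, n = A.shape

# Variables z = (x_1, ..., x_m, v); maximize the game value v, i.e. minimize -v.
c = np.zeros(m + 1)
c[-1] = -1.0

# For every pure strategy j of the column player: v - sum_i A[i, j] * x_i <= 0.
A_ub = np.hstack([-A.T, np.ones((n, 1))])
b_ub = np.zeros(n)

# The mixed strategy x must be a probability vector.
A_eq = np.hstack([np.ones((1, m)), np.zeros((1, 1))])
b_eq = np.array([1.0])

bounds = [(0, None)] * m + [(None, None)]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print(res.x[:m], res.x[-1])   # mixed strategy ~(0.5, 0.5), game value ~0.0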

Another example is dynamic programming where the Maximin paradigm is incorporated in the dynamic programming functional equation representing sequential decision processes that are subject to severe uncertainty (e.g. Sniedovich 2003[73][74]).

Summary

Recall that in plain language the Maximin paradigm maintains the following:

Maximin Rule
The maximin rule tells us to rank alternatives by their worst possible outcomes: we are to adopt the alternative the worst outcome of which is superior to the worst outcome of the others.
Rawls (1971, p. 152)

Info-gap's robustness model is a simple instance of this paradigm that is characterized by a specific decision space, state spaces and objective function, as discussed above.

Much can be gained by viewing info-gap's theory in this light.

Notes

  1. ^ Here are some examples: In many fields, including engineering, economics, management, biological conservation, medicine, homeland security, and more, analysts use models and data to evaluate and formulate decisions. An info-gap is the disparity between what is known and what needs to be known in order to make a reliable and responsible decision. Info-gaps are Knightian uncertainties: a lack of knowledge, an incompleteness of understanding. Info-gaps are non-probabilistic and cannot be insured against or modelled probabilistically. A common info-gap, though not the only kind, is uncertainty in the value of a parameter or of a vector of parameters, such as the durability of a new material or the future rates or return on stocks. Another common info-gap is uncertainty in the shape of a probability distribution. Another info-gap is uncertainty in the functional form of a property of the system, such as friction force in engineering, or the Phillips curve in economics. Another info-gap is in the shape and size of a set of possible vectors or functions. For instance, one may have very little knowledge about the relevant set of cardiac waveforms at the onset of heart failure in a specific individual.

References

  1. ^ Yakov Ben-Haim, Information-Gap Theory: Decisions Under Severe Uncertainty, Academic Press, London, 2001.
  2. ^ Yakov Ben-Haim, Info-Gap Theory: Decisions Under Severe Uncertainty, 2nd edition, Academic Press, London, 2006.
  3. ^ a b c Sniedovich, M. (2010). "A bird's view of info-gap decision theory". Journal of Risk Finance 11 (3): 268–283. doi:10.1108/15265941011043648. 
  4. ^ How Did Info-Gap Theory Start? How Does it Grow?
  5. ^ a b Yakov Ben-Haim, Robust Reliability in the Mechanical Science, Springer, Berlin ,1996.
  6. ^ Hipel, Keith W.; Ben-Haim, Yakov (1999). "Decision making in an uncertain world: Information-gap modelling in water resources management". IEEE Trans., Systems, Man and Cybernetics 29 (4): 506–517. doi:10.1109/5326.798765. 
  7. ^ a b c Yakov Ben-Haim, 2005, Info-gap Decision Theory For Engineering Design. Or: Why `Good' is Preferable to `Best', appearing as chapter 11 in Engineering Design Reliability Handbook, Edited by Efstratios Nikolaidis, Dan M.Ghiocel and Surendra Singhal, CRC Press, Boca Raton.
  8. ^ a b Kanno, Y.; Takewaki, I. (2006). "Robustness analysis of trusses with separable load and structural uncertainties". International Journal of Solids and Structures 43 (9): 2646–2669. doi:10.1016/j.ijsolstr.2005.06.088. 
  9. ^ a b Kaihong Wang, 2005, Vibration Analysis of Cracked Composite Bending-torsion Beams for Damage Diagnosis, PhD thesis, Virginia Politechnic Institute, Blacksburg, Virginia.
  10. ^ a b Kanno, Y.; Takewaki, I. (2006). "Sequential semidefinite program for maximum robustness design of structures under load uncertainty". Journal of Optimization Theory and Applications 130 (2): 265–287. doi:10.1007/s10957-006-9102-z. 
  11. ^ a b Pierce, S.G.; Worden, K.; Manson, G. (2006). "A novel information-gap technique to assess reliability of neural network-based damage detection". Journal of Sound and Vibration 293 (1–2): 96–111. doi:10.1016/j.jsv.2005.09.029. 
  12. ^ Pierce, Gareth; Ben-Haim, Yakov; Worden, Keith; Manson, Graeme (2006). "Evaluation of neural network robust reliability using information-gap theory". IEEE Transactions on Neural Networks 17 (6): 1349–1361. doi:10.1109/TNN.2006.880363. PMID 17131652. 
  13. ^ a b Chetwynd, D.; Worden, K.; Manson, G. (2006). "An application of interval-valued neural networks to a regression problem". Proceedings of the Royal Society - Mathematical, Physical and Engineering Sciences 462: 3097–3114. 
  14. ^ Lim, D.; Ong, Y. S.; Jin, Y.; Sendhoff, B.; Lee, B. S. (2006). "Inverse Multi-objective Robust Evolutionary Design". Genetic Programming and Evolvable Machines 7 (4): 383–404. doi:10.1007/s10710-006-9013-7. 
  15. ^ Vinot, P.; Cogan, S.; Cipolla, V. (2005). "A robust model-based test planning procedure". Journal of Sound and Vibration 288 (3): 571–585. doi:10.1016/j.jsv.2005.07.007. 
  16. ^ a b c Takewaki, Izuru; Ben-Haim, Yakov (2005). "Info-gap robust design with load and model uncertainties". Journal of Sound and Vibration 288 (3): 551–570. doi:10.1016/j.jsv.2005.07.005. 
  17. ^ Izuru Takewaki and Yakov Ben-Haim, 2007, Info-gap robust design of passively controlled structures with load and model uncertainties, Structural Design Optimization Considering Uncertainties, Yiannis Tsompanakis, Nikkos D. Lagaros and Manolis Papadrakakis, editors, Taylor and Francis Publishers.
  18. ^ Hemez, Francois M.; Ben-Haim, Yakov (2004). "Info-gap robustness for the correlation of tests and simulations of a nonlinear transient". Mechanical Systems and Signal Processing 18 (6): 1443–1467. doi:10.1016/j.ymssp.2004.03.001. 
  19. ^ a b Levy, Jason K.; Hipel, Keith W.; Kilgour, Marc (2000). "Using environmental indicators to quantify the robustness of policy alternatives to uncertainty". Ecological Modelling 130 (1–3): 79–86. doi:10.1016/S0304-3800(00)00226-X. 
  20. ^ Moilanen, A.; Wintle, B.A. (2006). "Uncertainty analysis favours selection of spatially aggregated reserve structures". Biological Conservation 129 (3): 427–434. doi:10.1016/j.biocon.2005.11.006. 
  21. ^ Halpern, Benjamin S.; Regan, Helen M.; Possingham, Hugh P.; McCarthy, Michael A. (2006). "Accounting for uncertainty in marine reserve design". Ecology Letters 9 (1): 2–11. doi:10.1111/j.1461-0248.2005.00827.x. PMID 16958861. 
  22. ^ Regan, Helen M.; Ben-Haim, Yakov; Langford, Bill; Wilson, Will G.; Lundberg, Per; Andelman, Sandy J.; Burgman, Mark A. (2005). "Robust decision making under severe uncertainty for conservation management". Ecological Applications 15 (4): 1471–1477. doi:10.1890/03-5419. 
  23. ^ McCarthy, M.A.; Lindenmayer, D.B. (2007). "Info-gap decision theory for assessing the management of catchments for timber production and urban water supply". Environmental Management 39 (4): 553–562. doi:10.1007/s00267-006-0022-3. PMID 17318697. 
  24. ^ Crone, Elizabeth E.; Pickering, Debbie; Schultz, Cheryl B. (2007). "Can captive rearing promote recovery of endangered butterflies? An assessment in the face of uncertainty". Biological Conservation 139 (1–2): 103–112. doi:10.1016/j.biocon.2007.06.007. 
  25. ^ L. Joe Moffitt, John K. Stranlund and Craig D. Osteen, 2007, Robust detection protocols for uncertain introductions of invasive species, Journal of Environmental Management, In Press, Corrected Proof, Available online 27 August 2007.
  26. ^ Burgman, M. A.; Lindenmayer, D.B.; Elith, J. (2005). "Managing landscapes for conservation under uncertainty". Ecology 86 (8): 2007–2017. doi:10.1890/04-0906. 
  27. ^ Moilanen, A.; Elith, J.; Burgman, M.; Burgman, M (2006). "Uncertainty analysis for regional-scale reserve selection". Conservation Biology 20 (6): 1688–1697. doi:10.1111/j.1523-1739.2006.00560.x. PMID 17181804. 
  28. ^ Moilanen, Atte; Runge, Michael C.; Elith, Jane; Tyre, Andrew; Carmel, Yohay; Fegraus, Eric; Wintle, Brendan; Burgman, Mark et al. (2006). "Planning for robust reserve networks using uncertainty analysis". Ecological Modelling 199 (1): 115–124. doi:10.1016/j.ecolmodel.2006.07.004. 
  29. ^ Nicholson, Emily; Possingham, Hugh P. (2007). "Making conservation decisions under uncertainty for the persistence of multiple species". Ecological Applications 17 (1): 251–265. doi:10.1890/1051-0761(2007)017[0251:MCDUUF]2.0.CO;2. PMID 17479849. 
  30. ^ Burgman, Mark, 2005, Risks and Decisions for Conservation and Environmental Management, Cambridge University Press, Cambridge.
  31. ^ Carmel, Yohay; Ben-Haim, Yakov (2005). "Info-gap robust-satisficing model of foraging behavior: Do foragers optimize or satisfice?". American Naturalist 166 (5): 633–641. doi:10.1086/491691. PMID 16224728. 
  32. ^ Moffitt, Joe; Stranlund, John K.; Field, Barry C. (2005). "Inspections to Avert Terrorism: Robustness Under Severe Uncertainty". Journal of Homeland Security and Emergency Management 2 (3): 3. doi:10.2202/1547-7355.1134. http://www.bepress.com/jhsem/vol2/iss3/3. 
  33. ^ a b Beresford-Smith, Bryan; Thompson, Colin J. (2007). "Managing credit risk with info-gap uncertainty". The Journal of Risk Finance 8 (1): 24–34. doi:10.1108/15265940710721055. 
  34. ^ John K. Stranlund and Yakov Ben-Haim, (2007), Price-based vs. quantity-based environmental regulation under Knightian uncertainty: An info-gap robust satisficing perspective, Journal of Environmental Management, In Press, Corrected Proof, Available online 28 March 2007.
  35. ^ a b c d Ben-Haim, Yakov (2005). "Value at risk with Info-gap uncertainty". Journal of Risk Finance 6 (5): 388–403. doi:10.1108/15265940510633460. 
  36. ^ Ben-Haim, Yakov; Laufer, Alexander (1998). "Robust reliability of projects with activity-duration uncertainty". ASCE Journal of Construction Engineering and Management 124 (2): 125–132. doi:10.1061/(ASCE)0733-9364(1998)124:2(125). 
  37. ^ a b c d Tahan, Meir; Ben-Asher, Joseph Z. (2005). "Modeling and analysis of integration processes for engineering systems". Systems Engineering 8 (1): 62–77. doi:10.1002/sys.20021. 
  38. ^ Regev, Sary; Shtub, Avraham; Ben-Haim, Yakov (2006). "Managing project risks as knowledge gaps". Project Management Journal 37 (5): 17–25. 
  39. ^ Fox, D.R.; Ben-Haim, Y.; Hayes, K.R.; McCarthy, M.; Wintle, B.; Dunstan, P. (2007). "An Info-Gap Approach to Power and Sample-size calculations". Environmetrics 18 (2): 189–203. doi:10.1002/env.811. 
  40. ^ Ben-Haim, Yakov (1994). "Convex models of uncertainty: Applications and Implications". Erkenntnis: an International Journal of Analytic Philosophy 41 (2): 139–156. doi:10.1007/BF01128824. 
  41. ^ Ben-Haim, Yakov (1999). "Set-models of information-gap uncertainty: Axioms and an inference scheme". Journal of the Franklin Institute 336 (7): 1093–1117. doi:10.1016/S0016-0032(99)00024-1. 
  42. ^ Ben-Haim, Yakov (2000). "Robust rationality and decisions under severe uncertainty". Journal of the Franklin Institute 337 (2–3): 171–199. doi:10.1016/S0016-0032(00)00016-8. 
  43. ^ Ben-Haim, Yakov (2004). "Uncertainty, probability and information-gaps". Reliability Engineering and System Safety 85: 249–266. doi:10.1016/j.ress.2004.03.015. 
  44. ^ George J. Klir, 2006, Uncertainty and Information: Foundations of Generalized Information Theory, Wiley Publishers.
  45. ^ Yakov Ben-Haim, 2007, Peirce, Haack and Info-gaps, in Susan Haack, A Lady of Distinctions: The Philosopher Responds to Her Critics, edited by Cornelis de Waal, Prometheus Books.
  46. ^ Burgman, Mark, 2005, Risks and Decisions for Conservation and Environmental Management, Cambridge University Press, Cambridge, pp.399.
  47. ^ a b c d e Sniedovich, M. (2007). "The art and science of modeling decision-making under severe uncertainty". Decision-Making in Manufacturing and Services 1 (1–2): 109–134. 
  48. ^ Simon, Herbert A. (1959). "Theories of decision making in economics and behavioral science". American Economic Review 49: 253–283. 
  49. ^ Schwartz, Barry, 2004, Paradox of Choice: Why More Is Less, Harper Perennial.
  50. ^ Conlisk, John (1996). "Why bounded rationality?". Journal of Economic Literature XXXIV: 669–700. 
  51. ^ Burgman, Mark, 2005, Risks and Decisions for Conservation and Environmental Management, Cambridge University Press, Cambridge, pp.391, 394.
  52. ^ a b Vinot, P.; Cogan, S.; Cipolla, V. (2005). "A robust model-based test planning procedure". Journal of Sound and Vibration 288 (3): 572. 
  53. ^ a b Z. Ben-Haim and Y. C. Eldar, Maximum set estimators with bounded estimation error, IEEE Trans. Signal Processing, vol. 53, no. 8, August 2005, pp. 3172-3182.
  54. ^ Babuška, I., F. Nobile and R. Tempone, 2005, Worst case scenario analysis for elliptic problems with uncertainty, Numerische Mathematik (in English) vol.101 pp.185–219.
  55. ^ Ben-Haim, Yakov; Cogan, Scott; Sanseigne, Laetitia (1998). "Usability of Mathematical Models in Mechanical Decision Processes". Mechanical Systems and Signal Processing 12: 121–134. doi:10.1006/mssp.1996.0137. 
  56. ^ (See also chapter 4 in Yakov Ben-Haim, Ref. 2.)
  57. ^ Rosenhead, M.J.; Elton, M.; Gupta, S.K. (1972). "Robustness and Optimality as Criteria for Strategic Decisions". Operational Research Quarterly 23 (4): 413–430. doi:10.1057/jors.1972.72. 
  58. ^ Rosenblatt, M.J.; Lee, H.L. (1987). "A robustness approach to facilities design". International Journal of Production Research 25 (4): 479–486. doi:10.1080/00207548708919855. 
  59. ^ a b P. Kouvelis and G. Yu, 1997, Robust Discrete Optimization and Its Applications, Kluwer.
  60. ^ a b B. Rustem and M. Howe, 2002, Algorithms for Worst-case Design and Applications to Risk Management, Princeton University Press.
  61. ^ R.J. Lempert, S.W. Popper, and S.C. Bankes, 2003, Shaping the Next One Hundred Years: New Methods for Quantitative, Long-Term Policy Analysis, The Rand Corporation.
  62. ^ A. Ben-Tal, L. El Ghaoui, and A. Nemirovski, 2006, Mathematical Programming, Special issue on Robust Optimization, Volume 107(1-2).
  63. ^ a b c d Resnik, M.D., Choices: an Introduction to Decision Theory, University of Minnesota Press, Minneapolis, MN, 1987.
  64. ^ a b c French, S.D., Decision Theory, Ellis Horwood, 1988.
  65. ^ Rawls, J. Theory of Justice, 1971, Belknap Press, Cambridge, MA.
  66. ^ James O. Berger (1985). Statistical decision theory and Bayesian analysis (Second ed.). New York: Springer Science + Business Media. ISBN 0-387-96098-8. http://books.google.com/?id=oY_x7dE15_AC&pg=PA100&dq=isbn=0387960988#PPA331,M1. 
  67. ^ Tintner, G. (1952). "Abraham Wald's contributions to econometrics". The Annals of Mathematical Statistics 23 (1): 21–28. doi:10.1214/aoms/1177729482. 
  68. ^ Babuška, I.; Nobile, F.; Tempone, R. (2005). "Worst case scenario analysis for elliptic problems with uncertainty". Numerische Mathematik 101 (2): 185–219. doi:10.1007/s00211-005-0601-x. 
  69. ^ Ben-Haim, Y. (1999). "Design certification with information-gap uncertainty". Structural Safety 2: 269–289. 
  70. ^ a b Sniedovich, M. (2007). "The art and science of modeling decision-making under severe uncertainty". Decision-Making in Manufacturing and Services 1 (1–2): 111–136. http://www.dmms.agh.edu.pl/Volume_1_2/Sniedovich.pdf. 
  71. ^ Ecker J.G. and Kupferschmid, M., Introduction to Operations Research, Wiley, 1988.
  72. ^ a b Thie, P., An Introduction to Linear Programming and Game Theory, Wiley, NY, 1988.
  73. ^ Sniedovich, M. (2003). "OR/MS Games: 3. The Counterfeit coin problem". INFORMS Transactions in Education 3 (2): 32–41. doi:10.1287/ited.3.2.32. 
  74. ^ Sniedovich, M. (2003). "OR/MS Games: 4. The joy of egg-dropping in Braunschweig and Hong Kong". INFORMS Transactions on Education 4 (1): 48–64. doi:10.1287/ited.4.1.48.